00:00:00.000 Started by upstream project "autotest-per-patch" build number 126189 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 23948 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.123 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.123 The recommended git tool is: git 00:00:00.123 using credential 00000000-0000-0000-0000-000000000002 00:00:00.125 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.153 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.182 Using shallow fetch with depth 1 00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.182 > git --version # timeout=10 00:00:00.208 > git --version # 'git version 2.39.2' 00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/10 # timeout=5 00:00:05.874 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.885 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.896 Checking out Revision d49304e16352441ae7eebb2419125dd094201f3e (FETCH_HEAD) 00:00:05.896 > git config core.sparsecheckout # timeout=10 00:00:05.908 > git read-tree -mu HEAD # timeout=10 00:00:05.926 > git checkout -f d49304e16352441ae7eebb2419125dd094201f3e # timeout=5 00:00:05.960 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing" 00:00:05.960 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.108 [Pipeline] Start of Pipeline 00:00:06.122 [Pipeline] library 00:00:06.123 Loading library shm_lib@master 00:00:06.124 Library shm_lib@master is cached. Copying from home. 00:00:06.139 [Pipeline] node 00:00:06.147 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.150 [Pipeline] { 00:00:06.161 [Pipeline] catchError 00:00:06.163 [Pipeline] { 00:00:06.180 [Pipeline] wrap 00:00:06.192 [Pipeline] { 00:00:06.203 [Pipeline] stage 00:00:06.205 [Pipeline] { (Prologue) 00:00:06.235 [Pipeline] echo 00:00:06.237 Node: VM-host-SM17 00:00:06.245 [Pipeline] cleanWs 00:00:06.256 [WS-CLEANUP] Deleting project workspace... 00:00:06.256 [WS-CLEANUP] Deferred wipeout is used... 
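The checkout above is the standard Gerrit per-patch pattern: shallow-fetch one change ref, then build the detached FETCH_HEAD. A minimal sketch of the same sequence outside Jenkins, assuming credentials for review.spdk.io; the target directory name is illustrative, the URL and change ref are taken from the log above.

# Sketch only: repository URL and change ref copied from the checkout trace above.
git init jbp && cd jbp
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# Shallow-fetch one Gerrit patchset ref (refs/changes/<NN>/<change>/<patchset>).
git fetch --tags --force --depth=1 origin refs/changes/56/22956/10
# Build against exactly the fetched revision, detached from any branch.
git checkout -f FETCH_HEAD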
00:00:06.261 [WS-CLEANUP] done 00:00:06.454 [Pipeline] setCustomBuildProperty 00:00:06.527 [Pipeline] httpRequest 00:00:06.551 [Pipeline] echo 00:00:06.553 Sorcerer 10.211.164.101 is alive 00:00:06.560 [Pipeline] httpRequest 00:00:06.566 HttpMethod: GET 00:00:06.566 URL: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:06.567 Sending request to url: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:06.586 Response Code: HTTP/1.1 200 OK 00:00:06.586 Success: Status code 200 is in the accepted range: 200,404 00:00:06.587 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:27.250 [Pipeline] sh 00:00:27.531 + tar --no-same-owner -xf jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:27.548 [Pipeline] httpRequest 00:00:27.583 [Pipeline] echo 00:00:27.585 Sorcerer 10.211.164.101 is alive 00:00:27.595 [Pipeline] httpRequest 00:00:27.600 HttpMethod: GET 00:00:27.601 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:27.601 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:27.608 Response Code: HTTP/1.1 200 OK 00:00:27.608 Success: Status code 200 is in the accepted range: 200,404 00:00:27.609 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:11.495 [Pipeline] sh 00:01:11.773 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:15.074 [Pipeline] sh 00:01:15.354 + git -C spdk log --oneline -n5 00:01:15.354 2728651ee accel: adjust task per ch define name 00:01:15.354 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:15.354 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:15.354 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:01:15.354 719d03c6a sock/uring: only register net impl if supported 00:01:15.377 [Pipeline] writeFile 00:01:15.394 [Pipeline] sh 00:01:15.703 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:15.715 [Pipeline] sh 00:01:15.996 + cat autorun-spdk.conf 00:01:15.996 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.996 SPDK_TEST_NVMF=1 00:01:15.996 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.996 SPDK_TEST_URING=1 00:01:15.996 SPDK_TEST_USDT=1 00:01:15.996 SPDK_RUN_UBSAN=1 00:01:15.996 NET_TYPE=virt 00:01:15.996 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.002 RUN_NIGHTLY=0 00:01:16.004 [Pipeline] } 00:01:16.020 [Pipeline] // stage 00:01:16.040 [Pipeline] stage 00:01:16.043 [Pipeline] { (Run VM) 00:01:16.060 [Pipeline] sh 00:01:16.341 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:16.341 + echo 'Start stage prepare_nvme.sh' 00:01:16.341 Start stage prepare_nvme.sh 00:01:16.341 + [[ -n 0 ]] 00:01:16.341 + disk_prefix=ex0 00:01:16.341 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:16.341 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:16.341 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:16.341 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.341 ++ SPDK_TEST_NVMF=1 00:01:16.341 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.341 ++ SPDK_TEST_URING=1 00:01:16.341 ++ SPDK_TEST_USDT=1 00:01:16.341 ++ SPDK_RUN_UBSAN=1 00:01:16.341 ++ NET_TYPE=virt 00:01:16.341 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.341 ++ RUN_NIGHTLY=0 00:01:16.341 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:16.341 + nvme_files=() 00:01:16.341 + declare -A nvme_files 00:01:16.341 + backend_dir=/var/lib/libvirt/images/backends 00:01:16.341 + nvme_files['nvme.img']=5G 00:01:16.341 + nvme_files['nvme-cmb.img']=5G 00:01:16.341 + nvme_files['nvme-multi0.img']=4G 00:01:16.341 + nvme_files['nvme-multi1.img']=4G 00:01:16.341 + nvme_files['nvme-multi2.img']=4G 00:01:16.341 + nvme_files['nvme-openstack.img']=8G 00:01:16.341 + nvme_files['nvme-zns.img']=5G 00:01:16.341 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:16.341 + (( SPDK_TEST_FTL == 1 )) 00:01:16.341 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:16.341 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:16.341 + for nvme in "${!nvme_files[@]}" 00:01:16.341 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:16.341 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.341 + for nvme in "${!nvme_files[@]}" 00:01:16.341 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:16.341 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.341 + for nvme in "${!nvme_files[@]}" 00:01:16.341 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:16.341 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:16.341 + for nvme in "${!nvme_files[@]}" 00:01:16.341 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:16.341 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.341 + for nvme in "${!nvme_files[@]}" 00:01:16.341 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:16.341 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.341 + for nvme in "${!nvme_files[@]}" 00:01:16.341 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:16.341 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.341 + for nvme in "${!nvme_files[@]}" 00:01:16.341 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:16.600 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.600 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:16.600 + echo 'End stage prepare_nvme.sh' 00:01:16.600 End stage prepare_nvme.sh 00:01:16.612 [Pipeline] sh 00:01:16.892 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:16.893 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:01:16.893 00:01:16.893 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:16.893 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:16.893 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:16.893 HELP=0 00:01:16.893 DRY_RUN=0 00:01:16.893 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:16.893 NVME_DISKS_TYPE=nvme,nvme, 00:01:16.893 NVME_AUTO_CREATE=0 00:01:16.893 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:16.893 NVME_CMB=,, 00:01:16.893 NVME_PMR=,, 00:01:16.893 NVME_ZNS=,, 00:01:16.893 NVME_MS=,, 00:01:16.893 NVME_FDP=,, 00:01:16.893 SPDK_VAGRANT_DISTRO=fedora38 00:01:16.893 SPDK_VAGRANT_VMCPU=10 00:01:16.893 SPDK_VAGRANT_VMRAM=12288 00:01:16.893 SPDK_VAGRANT_PROVIDER=libvirt 00:01:16.893 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:16.893 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:16.893 SPDK_OPENSTACK_NETWORK=0 00:01:16.893 VAGRANT_PACKAGE_BOX=0 00:01:16.893 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:16.893 FORCE_DISTRO=true 00:01:16.893 VAGRANT_BOX_VERSION= 00:01:16.893 EXTRA_VAGRANTFILES= 00:01:16.893 NIC_MODEL=e1000 00:01:16.893 00:01:16.893 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:16.893 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:20.180 Bringing machine 'default' up with 'libvirt' provider... 00:01:21.115 ==> default: Creating image (snapshot of base box volume). 00:01:21.115 ==> default: Creating domain with the following settings... 00:01:21.115 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721046353_2d76943f0d01a07bdf02 00:01:21.115 ==> default: -- Domain type: kvm 00:01:21.115 ==> default: -- Cpus: 10 00:01:21.115 ==> default: -- Feature: acpi 00:01:21.115 ==> default: -- Feature: apic 00:01:21.115 ==> default: -- Feature: pae 00:01:21.115 ==> default: -- Memory: 12288M 00:01:21.115 ==> default: -- Memory Backing: hugepages: 00:01:21.115 ==> default: -- Management MAC: 00:01:21.115 ==> default: -- Loader: 00:01:21.115 ==> default: -- Nvram: 00:01:21.115 ==> default: -- Base box: spdk/fedora38 00:01:21.115 ==> default: -- Storage pool: default 00:01:21.115 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721046353_2d76943f0d01a07bdf02.img (20G) 00:01:21.115 ==> default: -- Volume Cache: default 00:01:21.115 ==> default: -- Kernel: 00:01:21.115 ==> default: -- Initrd: 00:01:21.115 ==> default: -- Graphics Type: vnc 00:01:21.115 ==> default: -- Graphics Port: -1 00:01:21.115 ==> default: -- Graphics IP: 127.0.0.1 00:01:21.115 ==> default: -- Graphics Password: Not defined 00:01:21.115 ==> default: -- Video Type: cirrus 00:01:21.115 ==> default: -- Video VRAM: 9216 00:01:21.115 ==> default: -- Sound Type: 00:01:21.115 ==> default: -- Keymap: en-us 00:01:21.115 ==> default: -- TPM Path: 00:01:21.115 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:21.115 ==> default: -- Command line args: 00:01:21.115 ==> default: -> value=-device, 00:01:21.115 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:21.115 ==> default: -> value=-drive, 00:01:21.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:21.115 ==> default: -> value=-device, 00:01:21.115 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.115 ==> default: -> value=-device, 00:01:21.115 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:21.115 ==> default: -> value=-drive, 00:01:21.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:21.115 ==> default: -> value=-device, 00:01:21.115 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.115 ==> default: -> value=-drive, 00:01:21.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:21.115 ==> default: -> value=-device, 00:01:21.115 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.115 ==> default: -> value=-drive, 00:01:21.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:21.115 ==> default: -> value=-device, 00:01:21.115 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.115 ==> default: Creating shared folders metadata... 00:01:21.115 ==> default: Starting domain. 00:01:23.020 ==> default: Waiting for domain to get an IP address... 00:01:41.099 ==> default: Waiting for SSH to become available... 00:01:41.099 ==> default: Configuring and enabling network interfaces... 00:01:44.383 default: SSH address: 192.168.121.13:22 00:01:44.383 default: SSH username: vagrant 00:01:44.383 default: SSH auth method: private key 00:01:46.287 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:54.401 ==> default: Mounting SSHFS shared folder... 00:01:55.776 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.776 ==> default: Checking Mount.. 00:01:57.152 ==> default: Folder Successfully Mounted! 00:01:57.152 ==> default: Running provisioner: file... 00:01:58.088 default: ~/.gitconfig => .gitconfig 00:01:58.346 00:01:58.346 SUCCESS! 00:01:58.346 00:01:58.346 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:58.346 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:58.346 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
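Condensed from the "-> value=" pairs above, the guest gets two emulated NVMe controllers: a single-namespace controller (serial 12340) backed by ex0-nvme.img, and a controller (serial 12341) carrying three namespaces backed by the multi0/1/2 images. A sketch of an equivalent stand-alone QEMU invocation; the nvme/nvme-ns arguments are copied from the log, while -enable-kvm, the memory size, and the omission of the boot disk are illustrative.

# Illustrative reconstruction of the NVMe topology created for the test VM.
qemu-system-x86_64 -enable-kvm -m 4096 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Inside the guest this topology is what later appears as nvme0n1 and nvme1n1..n3 in the setup.sh status output.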
00:01:58.346 00:01:58.356 [Pipeline] } 00:01:58.375 [Pipeline] // stage 00:01:58.386 [Pipeline] dir 00:01:58.386 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:58.388 [Pipeline] { 00:01:58.403 [Pipeline] catchError 00:01:58.405 [Pipeline] { 00:01:58.424 [Pipeline] sh 00:01:58.708 + vagrant ssh-config --host vagrant 00:01:58.708 + sed -ne /^Host/,$p 00:01:58.708 + tee ssh_conf 00:02:02.891 Host vagrant 00:02:02.891 HostName 192.168.121.13 00:02:02.891 User vagrant 00:02:02.891 Port 22 00:02:02.891 UserKnownHostsFile /dev/null 00:02:02.891 StrictHostKeyChecking no 00:02:02.891 PasswordAuthentication no 00:02:02.891 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:02.891 IdentitiesOnly yes 00:02:02.891 LogLevel FATAL 00:02:02.891 ForwardAgent yes 00:02:02.891 ForwardX11 yes 00:02:02.891 00:02:02.906 [Pipeline] withEnv 00:02:02.909 [Pipeline] { 00:02:02.937 [Pipeline] sh 00:02:03.216 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:03.216 source /etc/os-release 00:02:03.216 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.216 # Minimal, systemd-like check. 00:02:03.216 if [[ -e /.dockerenv ]]; then 00:02:03.216 # Clear garbage from the node's name: 00:02:03.216 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.216 # $HOSTNAME is the actual container id 00:02:03.216 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.216 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:03.216 # We can assume this is a mount from a host where container is running, 00:02:03.216 # so fetch its hostname to easily identify the target swarm worker. 00:02:03.216 container="$(< /etc/hostname) ($agent)" 00:02:03.216 else 00:02:03.216 # Fallback 00:02:03.216 container=$agent 00:02:03.216 fi 00:02:03.216 fi 00:02:03.216 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.216 00:02:03.483 [Pipeline] } 00:02:03.501 [Pipeline] // withEnv 00:02:03.506 [Pipeline] setCustomBuildProperty 00:02:03.525 [Pipeline] stage 00:02:03.527 [Pipeline] { (Tests) 00:02:03.547 [Pipeline] sh 00:02:03.825 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.147 [Pipeline] sh 00:02:04.431 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.699 [Pipeline] timeout 00:02:04.699 Timeout set to expire in 30 min 00:02:04.700 [Pipeline] { 00:02:04.712 [Pipeline] sh 00:02:04.984 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:05.550 HEAD is now at 2728651ee accel: adjust task per ch define name 00:02:05.564 [Pipeline] sh 00:02:05.849 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:06.120 [Pipeline] sh 00:02:06.402 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:06.677 [Pipeline] sh 00:02:06.957 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:07.216 ++ readlink -f spdk_repo 00:02:07.216 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:07.216 + [[ -n /home/vagrant/spdk_repo ]] 00:02:07.216 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:07.216 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:07.216 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:02:07.216 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:07.216 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:07.216 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:07.216 + cd /home/vagrant/spdk_repo 00:02:07.216 + source /etc/os-release 00:02:07.216 ++ NAME='Fedora Linux' 00:02:07.216 ++ VERSION='38 (Cloud Edition)' 00:02:07.216 ++ ID=fedora 00:02:07.216 ++ VERSION_ID=38 00:02:07.216 ++ VERSION_CODENAME= 00:02:07.216 ++ PLATFORM_ID=platform:f38 00:02:07.216 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:07.216 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:07.216 ++ LOGO=fedora-logo-icon 00:02:07.216 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:07.216 ++ HOME_URL=https://fedoraproject.org/ 00:02:07.216 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:07.216 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:07.216 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:07.216 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:07.216 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:07.216 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:07.216 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:07.216 ++ SUPPORT_END=2024-05-14 00:02:07.216 ++ VARIANT='Cloud Edition' 00:02:07.216 ++ VARIANT_ID=cloud 00:02:07.216 + uname -a 00:02:07.216 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:07.216 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:07.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:07.783 Hugepages 00:02:07.783 node hugesize free / total 00:02:07.783 node0 1048576kB 0 / 0 00:02:07.783 node0 2048kB 0 / 0 00:02:07.783 00:02:07.783 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.783 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:07.783 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:07.783 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:07.783 + rm -f /tmp/spdk-ld-path 00:02:07.783 + source autorun-spdk.conf 00:02:07.783 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.783 ++ SPDK_TEST_NVMF=1 00:02:07.783 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.783 ++ SPDK_TEST_URING=1 00:02:07.783 ++ SPDK_TEST_USDT=1 00:02:07.783 ++ SPDK_RUN_UBSAN=1 00:02:07.783 ++ NET_TYPE=virt 00:02:07.783 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.783 ++ RUN_NIGHTLY=0 00:02:07.783 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.783 + [[ -n '' ]] 00:02:07.783 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:07.783 + for M in /var/spdk/build-*-manifest.txt 00:02:07.783 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.783 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.783 + for M in /var/spdk/build-*-manifest.txt 00:02:07.783 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.783 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.783 ++ uname 00:02:07.783 + [[ Linux == \L\i\n\u\x ]] 00:02:07.783 + sudo dmesg -T 00:02:07.783 + sudo dmesg --clear 00:02:07.783 + dmesg_pid=5115 00:02:07.783 + [[ Fedora Linux == FreeBSD ]] 00:02:07.783 + sudo dmesg -Tw 00:02:07.783 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.783 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.783 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.783 + [[ -x /usr/src/fio-static/fio ]] 
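The "node hugesize free / total" table printed by setup.sh status above is per-NUMA-node data the kernel exposes under sysfs. A minimal sketch (not the setup.sh implementation, just the same counters) that prints the two columns shown:

# Print free/total hugepages per NUMA node and page size, as in the table above.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    size=${hp##*hugepages-}   # e.g. 2048kB or 1048576kB
    echo "${node##*/} ${size} $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
  done
done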
00:02:07.783 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.783 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.783 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.783 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:07.783 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.783 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.783 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.783 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.783 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.783 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.783 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.783 Test configuration: 00:02:07.783 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.783 SPDK_TEST_NVMF=1 00:02:07.783 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.783 SPDK_TEST_URING=1 00:02:07.783 SPDK_TEST_USDT=1 00:02:07.783 SPDK_RUN_UBSAN=1 00:02:07.783 NET_TYPE=virt 00:02:07.783 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.043 RUN_NIGHTLY=0 12:26:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.043 12:26:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.043 12:26:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.043 12:26:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.043 12:26:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.043 12:26:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.043 12:26:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.043 12:26:40 -- paths/export.sh@5 -- $ export PATH 00:02:08.043 12:26:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.043 12:26:40 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.043 12:26:40 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:08.043 12:26:40 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721046400.XXXXXX 00:02:08.043 12:26:40 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721046400.HGUKhM 
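As the trace above shows, the whole job is parameterised by one flat key=value file that spdk/autorun.sh consumes. A sketch of driving the same configuration by hand, assuming an SPDK checkout under ~/spdk_repo/spdk; the variable values are copied from the autorun-spdk.conf dump earlier in the log.

# Hypothetical manual invocation; paths assume the layout used by this CI image.
cat > ~/spdk_repo/autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_URING=1
SPDK_TEST_USDT=1
SPDK_RUN_UBSAN=1
NET_TYPE=virt
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
RUN_NIGHTLY=0
EOF
~/spdk_repo/spdk/autorun.sh ~/spdk_repo/autorun-spdk.conf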
00:02:08.043 12:26:40 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:08.043 12:26:40 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:08.043 12:26:40 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:08.043 12:26:40 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.043 12:26:40 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.043 12:26:40 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:08.043 12:26:40 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:08.043 12:26:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.043 12:26:40 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:08.043 12:26:40 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:08.043 12:26:40 -- pm/common@17 -- $ local monitor 00:02:08.043 12:26:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.043 12:26:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.043 12:26:40 -- pm/common@25 -- $ sleep 1 00:02:08.043 12:26:40 -- pm/common@21 -- $ date +%s 00:02:08.043 12:26:40 -- pm/common@21 -- $ date +%s 00:02:08.043 12:26:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721046400 00:02:08.043 12:26:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721046400 00:02:08.043 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721046400_collect-vmstat.pm.log 00:02:08.043 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721046400_collect-cpu-load.pm.log 00:02:08.979 12:26:41 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:08.979 12:26:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.979 12:26:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.979 12:26:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.979 12:26:41 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.979 Mon Jul 15 12:26:41 PM UTC 2024 00:02:08.979 12:26:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.979 v24.09-pre-206-g2728651ee 00:02:08.979 12:26:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.979 12:26:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.979 12:26:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.979 12:26:41 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:08.979 12:26:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:08.979 12:26:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.979 ************************************ 00:02:08.979 START TEST ubsan 00:02:08.979 ************************************ 00:02:08.979 using ubsan 00:02:08.980 12:26:41 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:08.980 00:02:08.980 real 0m0.000s 00:02:08.980 user 0m0.000s 00:02:08.980 sys 
0m0.000s 00:02:08.980 12:26:41 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:08.980 ************************************ 00:02:08.980 12:26:41 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.980 END TEST ubsan 00:02:08.980 ************************************ 00:02:08.980 12:26:41 -- common/autotest_common.sh@1142 -- $ return 0 00:02:08.980 12:26:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.980 12:26:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.980 12:26:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.980 12:26:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.980 12:26:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.980 12:26:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.980 12:26:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.980 12:26:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.980 12:26:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:09.238 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:09.238 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.805 Using 'verbs' RDMA provider 00:02:25.621 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:37.820 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:37.820 Creating mk/config.mk...done. 00:02:37.820 Creating mk/cc.flags.mk...done. 00:02:37.820 Type 'make' to build. 00:02:37.820 12:27:09 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:37.820 12:27:09 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:37.820 12:27:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:37.820 12:27:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.820 ************************************ 00:02:37.820 START TEST make 00:02:37.820 ************************************ 00:02:37.820 12:27:09 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:37.820 make[1]: Nothing to be done for 'all'. 
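The configure line above and the make that follows are the whole SPDK build step for this job. A sketch of reproducing it outside the harness, with the flags copied verbatim from the autobuild trace; the fio source tree at /usr/src/fio is an assumption baked into this CI image.

cd ~/spdk_repo/spdk
# Same options autobuild passed to configure (see the trace above).
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
make -j10   # job count matches the 10 vCPUs given to the VM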
00:02:50.024 The Meson build system 00:02:50.024 Version: 1.3.1 00:02:50.024 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:50.024 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:50.024 Build type: native build 00:02:50.024 Program cat found: YES (/usr/bin/cat) 00:02:50.024 Project name: DPDK 00:02:50.024 Project version: 24.03.0 00:02:50.024 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:50.024 C linker for the host machine: cc ld.bfd 2.39-16 00:02:50.024 Host machine cpu family: x86_64 00:02:50.024 Host machine cpu: x86_64 00:02:50.024 Message: ## Building in Developer Mode ## 00:02:50.024 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:50.024 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:50.024 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:50.024 Program python3 found: YES (/usr/bin/python3) 00:02:50.024 Program cat found: YES (/usr/bin/cat) 00:02:50.024 Compiler for C supports arguments -march=native: YES 00:02:50.024 Checking for size of "void *" : 8 00:02:50.024 Checking for size of "void *" : 8 (cached) 00:02:50.024 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:50.024 Library m found: YES 00:02:50.024 Library numa found: YES 00:02:50.024 Has header "numaif.h" : YES 00:02:50.024 Library fdt found: NO 00:02:50.024 Library execinfo found: NO 00:02:50.024 Has header "execinfo.h" : YES 00:02:50.024 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:50.024 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:50.024 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:50.024 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:50.024 Run-time dependency openssl found: YES 3.0.9 00:02:50.024 Run-time dependency libpcap found: YES 1.10.4 00:02:50.024 Has header "pcap.h" with dependency libpcap: YES 00:02:50.024 Compiler for C supports arguments -Wcast-qual: YES 00:02:50.024 Compiler for C supports arguments -Wdeprecated: YES 00:02:50.024 Compiler for C supports arguments -Wformat: YES 00:02:50.024 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:50.024 Compiler for C supports arguments -Wformat-security: NO 00:02:50.024 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.024 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:50.024 Compiler for C supports arguments -Wnested-externs: YES 00:02:50.024 Compiler for C supports arguments -Wold-style-definition: YES 00:02:50.024 Compiler for C supports arguments -Wpointer-arith: YES 00:02:50.024 Compiler for C supports arguments -Wsign-compare: YES 00:02:50.024 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:50.024 Compiler for C supports arguments -Wundef: YES 00:02:50.024 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.024 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:50.024 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:50.024 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.024 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:50.024 Program objdump found: YES (/usr/bin/objdump) 00:02:50.024 Compiler for C supports arguments -mavx512f: YES 00:02:50.024 Checking if "AVX512 checking" compiles: YES 00:02:50.024 Fetching value of define "__SSE4_2__" : 1 00:02:50.024 Fetching value of define 
"__AES__" : 1 00:02:50.024 Fetching value of define "__AVX__" : 1 00:02:50.024 Fetching value of define "__AVX2__" : 1 00:02:50.024 Fetching value of define "__AVX512BW__" : (undefined) 00:02:50.024 Fetching value of define "__AVX512CD__" : (undefined) 00:02:50.024 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:50.024 Fetching value of define "__AVX512F__" : (undefined) 00:02:50.024 Fetching value of define "__AVX512VL__" : (undefined) 00:02:50.024 Fetching value of define "__PCLMUL__" : 1 00:02:50.024 Fetching value of define "__RDRND__" : 1 00:02:50.024 Fetching value of define "__RDSEED__" : 1 00:02:50.024 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:50.024 Fetching value of define "__znver1__" : (undefined) 00:02:50.024 Fetching value of define "__znver2__" : (undefined) 00:02:50.024 Fetching value of define "__znver3__" : (undefined) 00:02:50.024 Fetching value of define "__znver4__" : (undefined) 00:02:50.024 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:50.024 Message: lib/log: Defining dependency "log" 00:02:50.024 Message: lib/kvargs: Defining dependency "kvargs" 00:02:50.024 Message: lib/telemetry: Defining dependency "telemetry" 00:02:50.024 Checking for function "getentropy" : NO 00:02:50.024 Message: lib/eal: Defining dependency "eal" 00:02:50.024 Message: lib/ring: Defining dependency "ring" 00:02:50.024 Message: lib/rcu: Defining dependency "rcu" 00:02:50.024 Message: lib/mempool: Defining dependency "mempool" 00:02:50.025 Message: lib/mbuf: Defining dependency "mbuf" 00:02:50.025 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:50.025 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:50.025 Compiler for C supports arguments -mpclmul: YES 00:02:50.025 Compiler for C supports arguments -maes: YES 00:02:50.025 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:50.025 Compiler for C supports arguments -mavx512bw: YES 00:02:50.025 Compiler for C supports arguments -mavx512dq: YES 00:02:50.025 Compiler for C supports arguments -mavx512vl: YES 00:02:50.025 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:50.025 Compiler for C supports arguments -mavx2: YES 00:02:50.025 Compiler for C supports arguments -mavx: YES 00:02:50.025 Message: lib/net: Defining dependency "net" 00:02:50.025 Message: lib/meter: Defining dependency "meter" 00:02:50.025 Message: lib/ethdev: Defining dependency "ethdev" 00:02:50.025 Message: lib/pci: Defining dependency "pci" 00:02:50.025 Message: lib/cmdline: Defining dependency "cmdline" 00:02:50.025 Message: lib/hash: Defining dependency "hash" 00:02:50.025 Message: lib/timer: Defining dependency "timer" 00:02:50.025 Message: lib/compressdev: Defining dependency "compressdev" 00:02:50.025 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:50.025 Message: lib/dmadev: Defining dependency "dmadev" 00:02:50.025 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:50.025 Message: lib/power: Defining dependency "power" 00:02:50.025 Message: lib/reorder: Defining dependency "reorder" 00:02:50.025 Message: lib/security: Defining dependency "security" 00:02:50.025 Has header "linux/userfaultfd.h" : YES 00:02:50.025 Has header "linux/vduse.h" : YES 00:02:50.025 Message: lib/vhost: Defining dependency "vhost" 00:02:50.025 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:50.025 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:50.025 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:50.025 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:50.025 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:50.025 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:50.025 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:50.025 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:50.025 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:50.025 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:50.025 Program doxygen found: YES (/usr/bin/doxygen) 00:02:50.025 Configuring doxy-api-html.conf using configuration 00:02:50.025 Configuring doxy-api-man.conf using configuration 00:02:50.025 Program mandb found: YES (/usr/bin/mandb) 00:02:50.025 Program sphinx-build found: NO 00:02:50.025 Configuring rte_build_config.h using configuration 00:02:50.025 Message: 00:02:50.025 ================= 00:02:50.025 Applications Enabled 00:02:50.025 ================= 00:02:50.025 00:02:50.025 apps: 00:02:50.025 00:02:50.025 00:02:50.025 Message: 00:02:50.025 ================= 00:02:50.025 Libraries Enabled 00:02:50.025 ================= 00:02:50.025 00:02:50.025 libs: 00:02:50.025 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:50.025 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:50.025 cryptodev, dmadev, power, reorder, security, vhost, 00:02:50.025 00:02:50.025 Message: 00:02:50.025 =============== 00:02:50.025 Drivers Enabled 00:02:50.025 =============== 00:02:50.025 00:02:50.025 common: 00:02:50.025 00:02:50.025 bus: 00:02:50.025 pci, vdev, 00:02:50.025 mempool: 00:02:50.025 ring, 00:02:50.025 dma: 00:02:50.025 00:02:50.025 net: 00:02:50.025 00:02:50.025 crypto: 00:02:50.025 00:02:50.025 compress: 00:02:50.025 00:02:50.025 vdpa: 00:02:50.025 00:02:50.025 00:02:50.025 Message: 00:02:50.025 ================= 00:02:50.025 Content Skipped 00:02:50.025 ================= 00:02:50.025 00:02:50.025 apps: 00:02:50.025 dumpcap: explicitly disabled via build config 00:02:50.025 graph: explicitly disabled via build config 00:02:50.025 pdump: explicitly disabled via build config 00:02:50.025 proc-info: explicitly disabled via build config 00:02:50.025 test-acl: explicitly disabled via build config 00:02:50.025 test-bbdev: explicitly disabled via build config 00:02:50.025 test-cmdline: explicitly disabled via build config 00:02:50.025 test-compress-perf: explicitly disabled via build config 00:02:50.025 test-crypto-perf: explicitly disabled via build config 00:02:50.025 test-dma-perf: explicitly disabled via build config 00:02:50.025 test-eventdev: explicitly disabled via build config 00:02:50.025 test-fib: explicitly disabled via build config 00:02:50.025 test-flow-perf: explicitly disabled via build config 00:02:50.025 test-gpudev: explicitly disabled via build config 00:02:50.025 test-mldev: explicitly disabled via build config 00:02:50.025 test-pipeline: explicitly disabled via build config 00:02:50.025 test-pmd: explicitly disabled via build config 00:02:50.025 test-regex: explicitly disabled via build config 00:02:50.025 test-sad: explicitly disabled via build config 00:02:50.025 test-security-perf: explicitly disabled via build config 00:02:50.025 00:02:50.025 libs: 00:02:50.025 argparse: explicitly disabled via build config 00:02:50.025 metrics: explicitly disabled via build config 00:02:50.025 acl: explicitly disabled via build config 00:02:50.025 bbdev: explicitly disabled via build config 00:02:50.025 
bitratestats: explicitly disabled via build config 00:02:50.025 bpf: explicitly disabled via build config 00:02:50.025 cfgfile: explicitly disabled via build config 00:02:50.025 distributor: explicitly disabled via build config 00:02:50.025 efd: explicitly disabled via build config 00:02:50.025 eventdev: explicitly disabled via build config 00:02:50.025 dispatcher: explicitly disabled via build config 00:02:50.025 gpudev: explicitly disabled via build config 00:02:50.025 gro: explicitly disabled via build config 00:02:50.025 gso: explicitly disabled via build config 00:02:50.025 ip_frag: explicitly disabled via build config 00:02:50.025 jobstats: explicitly disabled via build config 00:02:50.025 latencystats: explicitly disabled via build config 00:02:50.025 lpm: explicitly disabled via build config 00:02:50.025 member: explicitly disabled via build config 00:02:50.025 pcapng: explicitly disabled via build config 00:02:50.025 rawdev: explicitly disabled via build config 00:02:50.025 regexdev: explicitly disabled via build config 00:02:50.025 mldev: explicitly disabled via build config 00:02:50.025 rib: explicitly disabled via build config 00:02:50.025 sched: explicitly disabled via build config 00:02:50.025 stack: explicitly disabled via build config 00:02:50.025 ipsec: explicitly disabled via build config 00:02:50.025 pdcp: explicitly disabled via build config 00:02:50.025 fib: explicitly disabled via build config 00:02:50.025 port: explicitly disabled via build config 00:02:50.025 pdump: explicitly disabled via build config 00:02:50.025 table: explicitly disabled via build config 00:02:50.025 pipeline: explicitly disabled via build config 00:02:50.025 graph: explicitly disabled via build config 00:02:50.025 node: explicitly disabled via build config 00:02:50.025 00:02:50.025 drivers: 00:02:50.025 common/cpt: not in enabled drivers build config 00:02:50.025 common/dpaax: not in enabled drivers build config 00:02:50.025 common/iavf: not in enabled drivers build config 00:02:50.025 common/idpf: not in enabled drivers build config 00:02:50.025 common/ionic: not in enabled drivers build config 00:02:50.025 common/mvep: not in enabled drivers build config 00:02:50.025 common/octeontx: not in enabled drivers build config 00:02:50.025 bus/auxiliary: not in enabled drivers build config 00:02:50.025 bus/cdx: not in enabled drivers build config 00:02:50.025 bus/dpaa: not in enabled drivers build config 00:02:50.025 bus/fslmc: not in enabled drivers build config 00:02:50.025 bus/ifpga: not in enabled drivers build config 00:02:50.025 bus/platform: not in enabled drivers build config 00:02:50.025 bus/uacce: not in enabled drivers build config 00:02:50.025 bus/vmbus: not in enabled drivers build config 00:02:50.025 common/cnxk: not in enabled drivers build config 00:02:50.025 common/mlx5: not in enabled drivers build config 00:02:50.025 common/nfp: not in enabled drivers build config 00:02:50.025 common/nitrox: not in enabled drivers build config 00:02:50.025 common/qat: not in enabled drivers build config 00:02:50.025 common/sfc_efx: not in enabled drivers build config 00:02:50.025 mempool/bucket: not in enabled drivers build config 00:02:50.025 mempool/cnxk: not in enabled drivers build config 00:02:50.025 mempool/dpaa: not in enabled drivers build config 00:02:50.025 mempool/dpaa2: not in enabled drivers build config 00:02:50.025 mempool/octeontx: not in enabled drivers build config 00:02:50.025 mempool/stack: not in enabled drivers build config 00:02:50.025 dma/cnxk: not in enabled drivers build 
config 00:02:50.025 dma/dpaa: not in enabled drivers build config 00:02:50.025 dma/dpaa2: not in enabled drivers build config 00:02:50.025 dma/hisilicon: not in enabled drivers build config 00:02:50.025 dma/idxd: not in enabled drivers build config 00:02:50.025 dma/ioat: not in enabled drivers build config 00:02:50.025 dma/skeleton: not in enabled drivers build config 00:02:50.025 net/af_packet: not in enabled drivers build config 00:02:50.025 net/af_xdp: not in enabled drivers build config 00:02:50.025 net/ark: not in enabled drivers build config 00:02:50.025 net/atlantic: not in enabled drivers build config 00:02:50.025 net/avp: not in enabled drivers build config 00:02:50.025 net/axgbe: not in enabled drivers build config 00:02:50.025 net/bnx2x: not in enabled drivers build config 00:02:50.025 net/bnxt: not in enabled drivers build config 00:02:50.025 net/bonding: not in enabled drivers build config 00:02:50.025 net/cnxk: not in enabled drivers build config 00:02:50.025 net/cpfl: not in enabled drivers build config 00:02:50.025 net/cxgbe: not in enabled drivers build config 00:02:50.025 net/dpaa: not in enabled drivers build config 00:02:50.025 net/dpaa2: not in enabled drivers build config 00:02:50.025 net/e1000: not in enabled drivers build config 00:02:50.025 net/ena: not in enabled drivers build config 00:02:50.025 net/enetc: not in enabled drivers build config 00:02:50.025 net/enetfec: not in enabled drivers build config 00:02:50.025 net/enic: not in enabled drivers build config 00:02:50.025 net/failsafe: not in enabled drivers build config 00:02:50.025 net/fm10k: not in enabled drivers build config 00:02:50.025 net/gve: not in enabled drivers build config 00:02:50.025 net/hinic: not in enabled drivers build config 00:02:50.025 net/hns3: not in enabled drivers build config 00:02:50.025 net/i40e: not in enabled drivers build config 00:02:50.026 net/iavf: not in enabled drivers build config 00:02:50.026 net/ice: not in enabled drivers build config 00:02:50.026 net/idpf: not in enabled drivers build config 00:02:50.026 net/igc: not in enabled drivers build config 00:02:50.026 net/ionic: not in enabled drivers build config 00:02:50.026 net/ipn3ke: not in enabled drivers build config 00:02:50.026 net/ixgbe: not in enabled drivers build config 00:02:50.026 net/mana: not in enabled drivers build config 00:02:50.026 net/memif: not in enabled drivers build config 00:02:50.026 net/mlx4: not in enabled drivers build config 00:02:50.026 net/mlx5: not in enabled drivers build config 00:02:50.026 net/mvneta: not in enabled drivers build config 00:02:50.026 net/mvpp2: not in enabled drivers build config 00:02:50.026 net/netvsc: not in enabled drivers build config 00:02:50.026 net/nfb: not in enabled drivers build config 00:02:50.026 net/nfp: not in enabled drivers build config 00:02:50.026 net/ngbe: not in enabled drivers build config 00:02:50.026 net/null: not in enabled drivers build config 00:02:50.026 net/octeontx: not in enabled drivers build config 00:02:50.026 net/octeon_ep: not in enabled drivers build config 00:02:50.026 net/pcap: not in enabled drivers build config 00:02:50.026 net/pfe: not in enabled drivers build config 00:02:50.026 net/qede: not in enabled drivers build config 00:02:50.026 net/ring: not in enabled drivers build config 00:02:50.026 net/sfc: not in enabled drivers build config 00:02:50.026 net/softnic: not in enabled drivers build config 00:02:50.026 net/tap: not in enabled drivers build config 00:02:50.026 net/thunderx: not in enabled drivers build config 00:02:50.026 
net/txgbe: not in enabled drivers build config 00:02:50.026 net/vdev_netvsc: not in enabled drivers build config 00:02:50.026 net/vhost: not in enabled drivers build config 00:02:50.026 net/virtio: not in enabled drivers build config 00:02:50.026 net/vmxnet3: not in enabled drivers build config 00:02:50.026 raw/*: missing internal dependency, "rawdev" 00:02:50.026 crypto/armv8: not in enabled drivers build config 00:02:50.026 crypto/bcmfs: not in enabled drivers build config 00:02:50.026 crypto/caam_jr: not in enabled drivers build config 00:02:50.026 crypto/ccp: not in enabled drivers build config 00:02:50.026 crypto/cnxk: not in enabled drivers build config 00:02:50.026 crypto/dpaa_sec: not in enabled drivers build config 00:02:50.026 crypto/dpaa2_sec: not in enabled drivers build config 00:02:50.026 crypto/ipsec_mb: not in enabled drivers build config 00:02:50.026 crypto/mlx5: not in enabled drivers build config 00:02:50.026 crypto/mvsam: not in enabled drivers build config 00:02:50.026 crypto/nitrox: not in enabled drivers build config 00:02:50.026 crypto/null: not in enabled drivers build config 00:02:50.026 crypto/octeontx: not in enabled drivers build config 00:02:50.026 crypto/openssl: not in enabled drivers build config 00:02:50.026 crypto/scheduler: not in enabled drivers build config 00:02:50.026 crypto/uadk: not in enabled drivers build config 00:02:50.026 crypto/virtio: not in enabled drivers build config 00:02:50.026 compress/isal: not in enabled drivers build config 00:02:50.026 compress/mlx5: not in enabled drivers build config 00:02:50.026 compress/nitrox: not in enabled drivers build config 00:02:50.026 compress/octeontx: not in enabled drivers build config 00:02:50.026 compress/zlib: not in enabled drivers build config 00:02:50.026 regex/*: missing internal dependency, "regexdev" 00:02:50.026 ml/*: missing internal dependency, "mldev" 00:02:50.026 vdpa/ifc: not in enabled drivers build config 00:02:50.026 vdpa/mlx5: not in enabled drivers build config 00:02:50.026 vdpa/nfp: not in enabled drivers build config 00:02:50.026 vdpa/sfc: not in enabled drivers build config 00:02:50.026 event/*: missing internal dependency, "eventdev" 00:02:50.026 baseband/*: missing internal dependency, "bbdev" 00:02:50.026 gpu/*: missing internal dependency, "gpudev" 00:02:50.026 00:02:50.026 00:02:50.026 Build targets in project: 85 00:02:50.026 00:02:50.026 DPDK 24.03.0 00:02:50.026 00:02:50.026 User defined options 00:02:50.026 buildtype : debug 00:02:50.026 default_library : shared 00:02:50.026 libdir : lib 00:02:50.026 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:50.026 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:50.026 c_link_args : 00:02:50.026 cpu_instruction_set: native 00:02:50.026 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:50.026 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:50.026 enable_docs : false 00:02:50.026 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:50.026 enable_kmods : false 00:02:50.026 max_lcores : 128 00:02:50.026 tests : false 00:02:50.026 00:02:50.026 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.591 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:50.591 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.591 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.591 [3/268] Linking static target lib/librte_kvargs.a 00:02:50.591 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.591 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.591 [6/268] Linking static target lib/librte_log.a 00:02:51.155 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.155 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:51.156 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:51.156 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:51.414 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:51.414 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:51.414 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:51.414 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:51.672 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.672 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:51.672 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.672 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:51.672 [19/268] Linking target lib/librte_log.so.24.1 00:02:51.672 [20/268] Linking static target lib/librte_telemetry.a 00:02:51.931 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:51.931 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.931 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:51.931 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:52.190 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:52.190 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:52.447 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.447 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:52.447 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.447 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:52.705 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:52.705 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:52.705 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:52.705 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:52.961 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:52.961 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:52.961 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.218 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.218 
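For reference, the "User defined options" summary above corresponds roughly to the meson invocation sketched below; every option value is copied from that summary, and the real call is made by SPDK's build system rather than typed by hand.

# Approximate reconstruction of the DPDK 24.03 configuration shown above.
cd ~/spdk_repo/spdk/dpdk
meson setup build-tmp \
  --buildtype=debug --default-library=shared --libdir=lib \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false \
  -Denable_docs=false -Denable_kmods=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
  -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
ninja -C build-tmp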
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:53.218 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:53.218 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:53.218 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.218 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:53.475 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.475 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.475 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.475 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:53.733 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:53.733 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:53.991 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:53.991 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:53.991 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.248 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.248 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.525 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:54.525 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.525 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.525 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.525 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.790 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:54.790 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.048 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.048 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.048 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.306 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.306 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.306 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:55.306 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.306 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:55.564 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:55.564 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:55.564 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:55.821 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:55.821 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:55.822 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.822 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.822 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.822 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.080 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:56.080 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.337 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:56.337 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.337 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:56.337 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:56.337 [85/268] Linking static target lib/librte_eal.a 00:02:56.595 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.595 [87/268] Linking static target lib/librte_rcu.a 00:02:56.595 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.595 [89/268] Linking static target lib/librte_ring.a 00:02:56.595 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.595 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.854 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.854 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.854 [94/268] Linking static target lib/librte_mempool.a 00:02:57.113 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.113 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.113 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:57.113 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:57.113 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.372 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.372 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.372 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:57.372 [103/268] Linking static target lib/librte_mbuf.a 00:02:57.631 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.631 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.631 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.889 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.889 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.889 [109/268] Linking static target lib/librte_net.a 00:02:58.148 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.148 [111/268] Linking static target lib/librte_meter.a 00:02:58.148 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.406 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.406 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.406 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.406 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.406 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.665 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.665 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.924 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:59.183 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:59.441 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.441 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.700 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.700 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.958 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.958 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.958 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.958 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.958 [130/268] Linking static target lib/librte_pci.a 00:02:59.958 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:59.958 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.958 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.958 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.958 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.958 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:00.217 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:00.217 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:00.217 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:00.217 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:00.217 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:00.217 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:00.217 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.217 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:00.217 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:00.475 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:00.475 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:00.737 [148/268] Linking static target lib/librte_ethdev.a 00:03:00.737 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:00.737 [150/268] Linking static target lib/librte_cmdline.a 00:03:00.737 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.996 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.996 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:00.996 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.996 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.996 [156/268] Linking static target lib/librte_timer.a 00:03:01.253 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.253 [158/268] Linking static target lib/librte_hash.a 00:03:01.253 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:01.510 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:01.767 
[161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:01.767 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.767 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.767 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:01.767 [165/268] Linking static target lib/librte_compressdev.a 00:03:02.023 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.023 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:02.279 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:02.279 [169/268] Linking static target lib/librte_dmadev.a 00:03:02.279 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:02.279 [171/268] Linking static target lib/librte_cryptodev.a 00:03:02.279 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.279 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.279 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:02.279 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:02.279 [176/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:02.537 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.537 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.794 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:02.794 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.794 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.794 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:03.053 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.053 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:03.311 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:03.311 [186/268] Linking static target lib/librte_power.a 00:03:03.311 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.311 [188/268] Linking static target lib/librte_reorder.a 00:03:03.569 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.569 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.569 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.569 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:03.569 [193/268] Linking static target lib/librte_security.a 00:03:03.826 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.827 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.392 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.392 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.392 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.392 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
00:03:04.392 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.392 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.650 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.912 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.912 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.912 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.912 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.912 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.170 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.170 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.170 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.170 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.170 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:05.429 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.429 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.429 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.429 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.429 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:05.429 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.429 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.429 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:05.429 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.429 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.687 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.687 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.687 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.687 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.687 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:05.945 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.511 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.511 [230/268] Linking static target lib/librte_vhost.a 00:03:07.445 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.445 [232/268] Linking target lib/librte_eal.so.24.1 00:03:07.445 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:07.445 [234/268] Linking target lib/librte_timer.so.24.1 00:03:07.445 [235/268] Linking target lib/librte_pci.so.24.1 00:03:07.445 [236/268] Linking target lib/librte_meter.so.24.1 00:03:07.445 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:07.445 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:07.445 [239/268] Linking target 
lib/librte_ring.so.24.1 00:03:07.702 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:07.702 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:07.702 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:07.702 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:07.702 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:07.702 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:07.702 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:07.702 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:07.960 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:07.960 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:07.960 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:07.960 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:07.960 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.960 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.275 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:08.275 [255/268] Linking target lib/librte_net.so.24.1 00:03:08.275 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:08.275 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:08.275 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:08.275 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:08.275 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:08.275 [261/268] Linking target lib/librte_security.so.24.1 00:03:08.275 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:08.275 [263/268] Linking target lib/librte_hash.so.24.1 00:03:08.533 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:08.533 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:08.533 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:08.533 [267/268] Linking target lib/librte_power.so.24.1 00:03:08.791 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:08.791 INFO: autodetecting backend as ninja 00:03:08.791 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:10.166 CC lib/log/log.o 00:03:10.166 CC lib/log/log_flags.o 00:03:10.166 CC lib/log/log_deprecated.o 00:03:10.166 CC lib/ut/ut.o 00:03:10.166 CC lib/ut_mock/mock.o 00:03:10.166 LIB libspdk_ut.a 00:03:10.166 LIB libspdk_log.a 00:03:10.166 LIB libspdk_ut_mock.a 00:03:10.166 SO libspdk_ut.so.2.0 00:03:10.166 SO libspdk_ut_mock.so.6.0 00:03:10.166 SO libspdk_log.so.7.0 00:03:10.166 SYMLINK libspdk_ut_mock.so 00:03:10.166 SYMLINK libspdk_ut.so 00:03:10.166 SYMLINK libspdk_log.so 00:03:10.424 CC lib/util/base64.o 00:03:10.424 CC lib/ioat/ioat.o 00:03:10.424 CC lib/util/bit_array.o 00:03:10.424 CC lib/util/cpuset.o 00:03:10.424 CC lib/dma/dma.o 00:03:10.424 CC lib/util/crc16.o 00:03:10.424 CC lib/util/crc32.o 00:03:10.424 CC lib/util/crc32c.o 00:03:10.424 CXX lib/trace_parser/trace.o 00:03:10.680 CC lib/vfio_user/host/vfio_user_pci.o 00:03:10.680 CC lib/util/crc32_ieee.o 00:03:10.680 CC lib/vfio_user/host/vfio_user.o 
00:03:10.680 CC lib/util/crc64.o 00:03:10.680 CC lib/util/dif.o 00:03:10.680 LIB libspdk_dma.a 00:03:10.680 CC lib/util/fd.o 00:03:10.680 CC lib/util/file.o 00:03:10.680 SO libspdk_dma.so.4.0 00:03:10.938 LIB libspdk_ioat.a 00:03:10.938 SYMLINK libspdk_dma.so 00:03:10.938 CC lib/util/hexlify.o 00:03:10.938 CC lib/util/iov.o 00:03:10.938 CC lib/util/math.o 00:03:10.938 SO libspdk_ioat.so.7.0 00:03:10.938 LIB libspdk_vfio_user.a 00:03:10.938 CC lib/util/pipe.o 00:03:10.938 CC lib/util/strerror_tls.o 00:03:10.938 CC lib/util/string.o 00:03:10.938 SO libspdk_vfio_user.so.5.0 00:03:10.938 SYMLINK libspdk_ioat.so 00:03:10.938 CC lib/util/uuid.o 00:03:10.938 CC lib/util/fd_group.o 00:03:10.938 CC lib/util/xor.o 00:03:10.938 SYMLINK libspdk_vfio_user.so 00:03:10.938 CC lib/util/zipf.o 00:03:11.196 LIB libspdk_util.a 00:03:11.453 SO libspdk_util.so.9.1 00:03:11.712 SYMLINK libspdk_util.so 00:03:11.712 LIB libspdk_trace_parser.a 00:03:11.712 SO libspdk_trace_parser.so.5.0 00:03:11.712 SYMLINK libspdk_trace_parser.so 00:03:11.712 CC lib/vmd/vmd.o 00:03:11.712 CC lib/vmd/led.o 00:03:11.712 CC lib/json/json_parse.o 00:03:11.712 CC lib/json/json_util.o 00:03:11.712 CC lib/json/json_write.o 00:03:11.712 CC lib/rdma_utils/rdma_utils.o 00:03:11.712 CC lib/conf/conf.o 00:03:11.712 CC lib/rdma_provider/common.o 00:03:11.712 CC lib/idxd/idxd.o 00:03:11.712 CC lib/env_dpdk/env.o 00:03:11.970 CC lib/env_dpdk/memory.o 00:03:11.970 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:11.970 LIB libspdk_conf.a 00:03:11.970 CC lib/env_dpdk/pci.o 00:03:11.970 SO libspdk_conf.so.6.0 00:03:11.970 CC lib/env_dpdk/init.o 00:03:11.970 LIB libspdk_rdma_utils.a 00:03:12.229 LIB libspdk_json.a 00:03:12.229 SO libspdk_rdma_utils.so.1.0 00:03:12.229 SO libspdk_json.so.6.0 00:03:12.229 SYMLINK libspdk_conf.so 00:03:12.229 CC lib/env_dpdk/threads.o 00:03:12.229 SYMLINK libspdk_rdma_utils.so 00:03:12.229 CC lib/idxd/idxd_user.o 00:03:12.229 LIB libspdk_rdma_provider.a 00:03:12.229 SYMLINK libspdk_json.so 00:03:12.229 CC lib/idxd/idxd_kernel.o 00:03:12.229 SO libspdk_rdma_provider.so.6.0 00:03:12.229 SYMLINK libspdk_rdma_provider.so 00:03:12.229 CC lib/env_dpdk/pci_ioat.o 00:03:12.229 CC lib/env_dpdk/pci_virtio.o 00:03:12.488 CC lib/env_dpdk/pci_vmd.o 00:03:12.488 CC lib/env_dpdk/pci_idxd.o 00:03:12.488 LIB libspdk_idxd.a 00:03:12.488 LIB libspdk_vmd.a 00:03:12.488 CC lib/env_dpdk/pci_event.o 00:03:12.488 CC lib/env_dpdk/sigbus_handler.o 00:03:12.488 CC lib/env_dpdk/pci_dpdk.o 00:03:12.488 SO libspdk_idxd.so.12.0 00:03:12.488 SO libspdk_vmd.so.6.0 00:03:12.488 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.488 SYMLINK libspdk_idxd.so 00:03:12.488 SYMLINK libspdk_vmd.so 00:03:12.488 CC lib/jsonrpc/jsonrpc_server.o 00:03:12.488 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.488 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:12.488 CC lib/jsonrpc/jsonrpc_client.o 00:03:12.746 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:12.746 LIB libspdk_jsonrpc.a 00:03:13.005 SO libspdk_jsonrpc.so.6.0 00:03:13.005 SYMLINK libspdk_jsonrpc.so 00:03:13.263 CC lib/rpc/rpc.o 00:03:13.263 LIB libspdk_env_dpdk.a 00:03:13.544 SO libspdk_env_dpdk.so.14.1 00:03:13.545 LIB libspdk_rpc.a 00:03:13.545 SO libspdk_rpc.so.6.0 00:03:13.545 SYMLINK libspdk_env_dpdk.so 00:03:13.545 SYMLINK libspdk_rpc.so 00:03:13.803 CC lib/keyring/keyring_rpc.o 00:03:13.803 CC lib/notify/notify_rpc.o 00:03:13.803 CC lib/keyring/keyring.o 00:03:13.803 CC lib/notify/notify.o 00:03:13.803 CC lib/trace/trace_flags.o 00:03:13.803 CC lib/trace/trace.o 00:03:13.803 CC lib/trace/trace_rpc.o 00:03:14.063 LIB 
libspdk_notify.a 00:03:14.063 SO libspdk_notify.so.6.0 00:03:14.063 LIB libspdk_keyring.a 00:03:14.063 SYMLINK libspdk_notify.so 00:03:14.063 LIB libspdk_trace.a 00:03:14.321 SO libspdk_keyring.so.1.0 00:03:14.321 SO libspdk_trace.so.10.0 00:03:14.321 SYMLINK libspdk_keyring.so 00:03:14.321 SYMLINK libspdk_trace.so 00:03:14.580 CC lib/thread/thread.o 00:03:14.580 CC lib/thread/iobuf.o 00:03:14.580 CC lib/sock/sock.o 00:03:14.580 CC lib/sock/sock_rpc.o 00:03:15.146 LIB libspdk_sock.a 00:03:15.146 SO libspdk_sock.so.10.0 00:03:15.146 SYMLINK libspdk_sock.so 00:03:15.405 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:15.405 CC lib/nvme/nvme_ctrlr.o 00:03:15.405 CC lib/nvme/nvme_fabric.o 00:03:15.405 CC lib/nvme/nvme_ns_cmd.o 00:03:15.405 CC lib/nvme/nvme_pcie_common.o 00:03:15.405 CC lib/nvme/nvme_ns.o 00:03:15.405 CC lib/nvme/nvme_pcie.o 00:03:15.405 CC lib/nvme/nvme_qpair.o 00:03:15.405 CC lib/nvme/nvme.o 00:03:16.337 LIB libspdk_thread.a 00:03:16.337 SO libspdk_thread.so.10.1 00:03:16.337 CC lib/nvme/nvme_quirks.o 00:03:16.337 CC lib/nvme/nvme_transport.o 00:03:16.337 SYMLINK libspdk_thread.so 00:03:16.337 CC lib/nvme/nvme_discovery.o 00:03:16.337 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:16.338 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:16.338 CC lib/nvme/nvme_tcp.o 00:03:16.338 CC lib/nvme/nvme_opal.o 00:03:16.595 CC lib/accel/accel.o 00:03:16.595 CC lib/nvme/nvme_io_msg.o 00:03:16.851 CC lib/nvme/nvme_poll_group.o 00:03:16.851 CC lib/accel/accel_rpc.o 00:03:16.851 CC lib/accel/accel_sw.o 00:03:17.109 CC lib/blob/blobstore.o 00:03:17.109 CC lib/nvme/nvme_zns.o 00:03:17.109 CC lib/nvme/nvme_stubs.o 00:03:17.109 CC lib/nvme/nvme_auth.o 00:03:17.367 CC lib/nvme/nvme_cuse.o 00:03:17.367 CC lib/init/json_config.o 00:03:17.625 LIB libspdk_accel.a 00:03:17.625 SO libspdk_accel.so.15.1 00:03:17.625 CC lib/init/subsystem.o 00:03:17.625 CC lib/init/subsystem_rpc.o 00:03:17.625 CC lib/init/rpc.o 00:03:17.625 SYMLINK libspdk_accel.so 00:03:17.883 CC lib/nvme/nvme_rdma.o 00:03:17.883 CC lib/blob/request.o 00:03:17.883 CC lib/blob/zeroes.o 00:03:17.883 CC lib/blob/blob_bs_dev.o 00:03:17.883 LIB libspdk_init.a 00:03:17.883 SO libspdk_init.so.5.0 00:03:17.883 CC lib/virtio/virtio.o 00:03:17.883 CC lib/virtio/virtio_vhost_user.o 00:03:18.141 SYMLINK libspdk_init.so 00:03:18.141 CC lib/virtio/virtio_vfio_user.o 00:03:18.141 CC lib/virtio/virtio_pci.o 00:03:18.141 CC lib/bdev/bdev.o 00:03:18.141 CC lib/bdev/bdev_rpc.o 00:03:18.141 CC lib/bdev/bdev_zone.o 00:03:18.399 CC lib/bdev/part.o 00:03:18.399 CC lib/event/app.o 00:03:18.399 CC lib/event/reactor.o 00:03:18.399 CC lib/bdev/scsi_nvme.o 00:03:18.399 CC lib/event/log_rpc.o 00:03:18.399 LIB libspdk_virtio.a 00:03:18.399 SO libspdk_virtio.so.7.0 00:03:18.656 CC lib/event/app_rpc.o 00:03:18.656 CC lib/event/scheduler_static.o 00:03:18.656 SYMLINK libspdk_virtio.so 00:03:18.921 LIB libspdk_event.a 00:03:18.921 SO libspdk_event.so.14.0 00:03:18.921 SYMLINK libspdk_event.so 00:03:19.193 LIB libspdk_nvme.a 00:03:19.450 SO libspdk_nvme.so.13.1 00:03:19.709 SYMLINK libspdk_nvme.so 00:03:20.276 LIB libspdk_blob.a 00:03:20.276 SO libspdk_blob.so.11.0 00:03:20.276 SYMLINK libspdk_blob.so 00:03:20.535 CC lib/blobfs/blobfs.o 00:03:20.535 CC lib/blobfs/tree.o 00:03:20.535 CC lib/lvol/lvol.o 00:03:20.794 LIB libspdk_bdev.a 00:03:21.053 SO libspdk_bdev.so.15.1 00:03:21.053 SYMLINK libspdk_bdev.so 00:03:21.312 CC lib/ublk/ublk.o 00:03:21.312 CC lib/ublk/ublk_rpc.o 00:03:21.312 CC lib/ftl/ftl_core.o 00:03:21.312 CC lib/nbd/nbd.o 00:03:21.312 CC lib/ftl/ftl_init.o 00:03:21.312 CC 
lib/nbd/nbd_rpc.o 00:03:21.312 CC lib/nvmf/ctrlr.o 00:03:21.312 CC lib/scsi/dev.o 00:03:21.571 CC lib/scsi/lun.o 00:03:21.571 CC lib/scsi/port.o 00:03:21.571 CC lib/scsi/scsi.o 00:03:21.571 LIB libspdk_blobfs.a 00:03:21.571 SO libspdk_blobfs.so.10.0 00:03:21.571 LIB libspdk_lvol.a 00:03:21.571 CC lib/scsi/scsi_bdev.o 00:03:21.571 SO libspdk_lvol.so.10.0 00:03:21.571 SYMLINK libspdk_blobfs.so 00:03:21.571 CC lib/scsi/scsi_pr.o 00:03:21.571 CC lib/ftl/ftl_layout.o 00:03:21.571 CC lib/scsi/scsi_rpc.o 00:03:21.830 CC lib/scsi/task.o 00:03:21.830 SYMLINK libspdk_lvol.so 00:03:21.830 CC lib/nvmf/ctrlr_discovery.o 00:03:21.830 LIB libspdk_nbd.a 00:03:21.830 SO libspdk_nbd.so.7.0 00:03:21.830 CC lib/ftl/ftl_debug.o 00:03:21.830 CC lib/nvmf/ctrlr_bdev.o 00:03:21.830 SYMLINK libspdk_nbd.so 00:03:21.830 CC lib/nvmf/subsystem.o 00:03:21.830 CC lib/ftl/ftl_io.o 00:03:22.088 LIB libspdk_ublk.a 00:03:22.088 CC lib/ftl/ftl_sb.o 00:03:22.088 CC lib/ftl/ftl_l2p.o 00:03:22.088 SO libspdk_ublk.so.3.0 00:03:22.088 CC lib/ftl/ftl_l2p_flat.o 00:03:22.088 SYMLINK libspdk_ublk.so 00:03:22.088 CC lib/ftl/ftl_nv_cache.o 00:03:22.088 LIB libspdk_scsi.a 00:03:22.088 CC lib/ftl/ftl_band.o 00:03:22.088 SO libspdk_scsi.so.9.0 00:03:22.347 CC lib/ftl/ftl_band_ops.o 00:03:22.347 CC lib/nvmf/nvmf.o 00:03:22.347 CC lib/ftl/ftl_writer.o 00:03:22.347 SYMLINK libspdk_scsi.so 00:03:22.347 CC lib/ftl/ftl_rq.o 00:03:22.347 CC lib/nvmf/nvmf_rpc.o 00:03:22.606 CC lib/ftl/ftl_reloc.o 00:03:22.606 CC lib/nvmf/transport.o 00:03:22.606 CC lib/nvmf/tcp.o 00:03:22.606 CC lib/ftl/ftl_l2p_cache.o 00:03:22.606 CC lib/ftl/ftl_p2l.o 00:03:23.171 CC lib/iscsi/conn.o 00:03:23.171 CC lib/iscsi/init_grp.o 00:03:23.171 CC lib/iscsi/iscsi.o 00:03:23.171 CC lib/iscsi/md5.o 00:03:23.171 CC lib/vhost/vhost.o 00:03:23.171 CC lib/vhost/vhost_rpc.o 00:03:23.171 CC lib/ftl/mngt/ftl_mngt.o 00:03:23.171 CC lib/vhost/vhost_scsi.o 00:03:23.429 CC lib/iscsi/param.o 00:03:23.429 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.429 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.429 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.687 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.687 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.687 CC lib/iscsi/portal_grp.o 00:03:23.687 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.687 CC lib/iscsi/tgt_node.o 00:03:23.946 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:23.946 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:23.946 CC lib/iscsi/iscsi_subsystem.o 00:03:23.946 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:23.946 CC lib/iscsi/iscsi_rpc.o 00:03:23.946 CC lib/iscsi/task.o 00:03:23.946 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.205 CC lib/nvmf/stubs.o 00:03:24.205 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.205 CC lib/vhost/vhost_blk.o 00:03:24.205 CC lib/nvmf/mdns_server.o 00:03:24.205 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.205 CC lib/ftl/utils/ftl_conf.o 00:03:24.205 CC lib/ftl/utils/ftl_md.o 00:03:24.494 CC lib/nvmf/rdma.o 00:03:24.494 CC lib/ftl/utils/ftl_mempool.o 00:03:24.494 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.494 CC lib/ftl/utils/ftl_property.o 00:03:24.494 LIB libspdk_iscsi.a 00:03:24.494 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.494 CC lib/vhost/rte_vhost_user.o 00:03:24.494 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.494 CC lib/nvmf/auth.o 00:03:24.767 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.767 SO libspdk_iscsi.so.8.0 00:03:24.767 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.767 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.767 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.767 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.767 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.767 SYMLINK libspdk_iscsi.so 00:03:25.026 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:25.026 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:25.026 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.026 CC lib/ftl/base/ftl_base_dev.o 00:03:25.026 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.026 CC lib/ftl/ftl_trace.o 00:03:25.285 LIB libspdk_ftl.a 00:03:25.543 SO libspdk_ftl.so.9.0 00:03:25.801 LIB libspdk_vhost.a 00:03:25.801 SO libspdk_vhost.so.8.0 00:03:26.060 SYMLINK libspdk_vhost.so 00:03:26.060 SYMLINK libspdk_ftl.so 00:03:26.317 LIB libspdk_nvmf.a 00:03:26.576 SO libspdk_nvmf.so.18.1 00:03:26.835 SYMLINK libspdk_nvmf.so 00:03:27.094 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.352 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.352 CC module/accel/dsa/accel_dsa.o 00:03:27.352 CC module/blob/bdev/blob_bdev.o 00:03:27.352 CC module/accel/error/accel_error.o 00:03:27.352 CC module/sock/posix/posix.o 00:03:27.352 CC module/accel/ioat/accel_ioat.o 00:03:27.352 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.352 CC module/keyring/file/keyring.o 00:03:27.352 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.352 LIB libspdk_env_dpdk_rpc.a 00:03:27.352 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.352 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.610 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.610 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.610 CC module/keyring/file/keyring_rpc.o 00:03:27.610 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.610 CC module/accel/error/accel_error_rpc.o 00:03:27.610 LIB libspdk_scheduler_gscheduler.a 00:03:27.610 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.610 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.610 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.610 LIB libspdk_scheduler_dynamic.a 00:03:27.610 LIB libspdk_blob_bdev.a 00:03:27.610 LIB libspdk_accel_ioat.a 00:03:27.610 SO libspdk_scheduler_dynamic.so.4.0 00:03:27.610 SO libspdk_blob_bdev.so.11.0 00:03:27.610 LIB libspdk_keyring_file.a 00:03:27.610 SO libspdk_accel_ioat.so.6.0 00:03:27.610 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.610 SYMLINK libspdk_scheduler_dynamic.so 00:03:27.610 LIB libspdk_accel_error.a 00:03:27.610 SO libspdk_keyring_file.so.1.0 00:03:27.610 SYMLINK libspdk_blob_bdev.so 00:03:27.868 LIB libspdk_accel_dsa.a 00:03:27.868 SYMLINK libspdk_accel_ioat.so 00:03:27.868 CC module/sock/uring/uring.o 00:03:27.868 SO libspdk_accel_error.so.2.0 00:03:27.868 CC module/keyring/linux/keyring.o 00:03:27.868 SO libspdk_accel_dsa.so.5.0 00:03:27.868 SYMLINK libspdk_keyring_file.so 00:03:27.868 CC module/keyring/linux/keyring_rpc.o 00:03:27.868 SYMLINK libspdk_accel_error.so 00:03:27.868 SYMLINK libspdk_accel_dsa.so 00:03:27.868 CC module/accel/iaa/accel_iaa.o 00:03:27.868 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.868 LIB libspdk_keyring_linux.a 00:03:28.125 SO libspdk_keyring_linux.so.1.0 00:03:28.125 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.125 CC module/bdev/gpt/gpt.o 00:03:28.125 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.125 CC module/bdev/error/vbdev_error.o 00:03:28.125 CC module/bdev/delay/vbdev_delay.o 00:03:28.125 SYMLINK libspdk_keyring_linux.so 00:03:28.125 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.125 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.125 LIB libspdk_sock_posix.a 00:03:28.125 LIB libspdk_accel_iaa.a 00:03:28.125 SO libspdk_accel_iaa.so.3.0 00:03:28.125 SO libspdk_sock_posix.so.6.0 00:03:28.383 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.383 SYMLINK libspdk_accel_iaa.so 00:03:28.383 SYMLINK libspdk_sock_posix.so 00:03:28.383 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.383 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.383 LIB libspdk_blobfs_bdev.a 00:03:28.383 SO libspdk_blobfs_bdev.so.6.0 00:03:28.383 SYMLINK libspdk_blobfs_bdev.so 00:03:28.383 LIB libspdk_bdev_delay.a 00:03:28.383 CC module/bdev/malloc/bdev_malloc.o 00:03:28.383 LIB libspdk_sock_uring.a 00:03:28.640 SO libspdk_bdev_delay.so.6.0 00:03:28.640 CC module/bdev/null/bdev_null.o 00:03:28.640 LIB libspdk_bdev_error.a 00:03:28.640 SO libspdk_sock_uring.so.5.0 00:03:28.640 CC module/bdev/nvme/bdev_nvme.o 00:03:28.640 SO libspdk_bdev_error.so.6.0 00:03:28.640 LIB libspdk_bdev_gpt.a 00:03:28.640 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.640 SYMLINK libspdk_bdev_delay.so 00:03:28.640 SO libspdk_bdev_gpt.so.6.0 00:03:28.640 SYMLINK libspdk_sock_uring.so 00:03:28.640 SYMLINK libspdk_bdev_error.so 00:03:28.640 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.640 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.640 SYMLINK libspdk_bdev_gpt.so 00:03:28.640 LIB libspdk_bdev_lvol.a 00:03:28.640 SO libspdk_bdev_lvol.so.6.0 00:03:28.896 SYMLINK libspdk_bdev_lvol.so 00:03:28.896 CC module/bdev/split/vbdev_split.o 00:03:28.896 CC module/bdev/null/bdev_null_rpc.o 00:03:28.896 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.896 CC module/bdev/raid/bdev_raid.o 00:03:28.896 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.896 LIB libspdk_bdev_malloc.a 00:03:28.896 SO libspdk_bdev_malloc.so.6.0 00:03:28.896 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.896 SYMLINK libspdk_bdev_malloc.so 00:03:28.896 CC module/bdev/uring/bdev_uring.o 00:03:28.896 LIB libspdk_bdev_null.a 00:03:29.152 LIB libspdk_bdev_passthru.a 00:03:29.152 SO libspdk_bdev_null.so.6.0 00:03:29.152 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.152 SO libspdk_bdev_passthru.so.6.0 00:03:29.152 SYMLINK libspdk_bdev_null.so 00:03:29.152 CC module/bdev/aio/bdev_aio.o 00:03:29.152 SYMLINK libspdk_bdev_passthru.so 00:03:29.152 LIB libspdk_bdev_zone_block.a 00:03:29.152 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.152 SO libspdk_bdev_zone_block.so.6.0 00:03:29.410 LIB libspdk_bdev_split.a 00:03:29.410 SYMLINK libspdk_bdev_zone_block.so 00:03:29.410 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.410 SO libspdk_bdev_split.so.6.0 00:03:29.410 CC module/bdev/ftl/bdev_ftl.o 00:03:29.410 CC module/bdev/iscsi/bdev_iscsi.o 00:03:29.410 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.410 CC module/bdev/uring/bdev_uring_rpc.o 00:03:29.410 SYMLINK libspdk_bdev_split.so 00:03:29.410 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.410 CC module/bdev/raid/raid0.o 00:03:29.668 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.668 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.668 LIB libspdk_bdev_uring.a 00:03:29.668 CC module/bdev/raid/raid1.o 00:03:29.668 SO libspdk_bdev_uring.so.6.0 00:03:29.668 CC module/bdev/nvme/nvme_rpc.o 00:03:29.668 SYMLINK libspdk_bdev_uring.so 00:03:29.668 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.668 CC module/bdev/nvme/vbdev_opal.o 00:03:29.668 LIB libspdk_bdev_aio.a 00:03:29.668 LIB libspdk_bdev_iscsi.a 00:03:29.668 SO libspdk_bdev_aio.so.6.0 00:03:29.668 SO libspdk_bdev_iscsi.so.6.0 00:03:29.926 LIB libspdk_bdev_ftl.a 00:03:29.926 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.926 SYMLINK libspdk_bdev_aio.so 00:03:29.926 SYMLINK libspdk_bdev_iscsi.so 00:03:29.926 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.926 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.926 SO libspdk_bdev_ftl.so.6.0 00:03:29.926 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.926 CC 
module/bdev/raid/concat.o 00:03:29.926 SYMLINK libspdk_bdev_ftl.so 00:03:30.184 LIB libspdk_bdev_virtio.a 00:03:30.184 LIB libspdk_bdev_raid.a 00:03:30.184 SO libspdk_bdev_virtio.so.6.0 00:03:30.184 SO libspdk_bdev_raid.so.6.0 00:03:30.184 SYMLINK libspdk_bdev_virtio.so 00:03:30.184 SYMLINK libspdk_bdev_raid.so 00:03:31.116 LIB libspdk_bdev_nvme.a 00:03:31.116 SO libspdk_bdev_nvme.so.7.0 00:03:31.116 SYMLINK libspdk_bdev_nvme.so 00:03:31.682 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.682 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.682 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.682 CC module/event/subsystems/vmd/vmd.o 00:03:31.682 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.682 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.682 CC module/event/subsystems/sock/sock.o 00:03:31.682 CC module/event/subsystems/keyring/keyring.o 00:03:31.682 LIB libspdk_event_scheduler.a 00:03:31.682 LIB libspdk_event_vhost_blk.a 00:03:31.682 LIB libspdk_event_keyring.a 00:03:31.682 LIB libspdk_event_sock.a 00:03:31.682 LIB libspdk_event_vmd.a 00:03:31.941 LIB libspdk_event_iobuf.a 00:03:31.941 SO libspdk_event_vhost_blk.so.3.0 00:03:31.941 SO libspdk_event_scheduler.so.4.0 00:03:31.941 SO libspdk_event_keyring.so.1.0 00:03:31.941 SO libspdk_event_sock.so.5.0 00:03:31.941 SO libspdk_event_vmd.so.6.0 00:03:31.941 SO libspdk_event_iobuf.so.3.0 00:03:31.941 SYMLINK libspdk_event_keyring.so 00:03:31.941 SYMLINK libspdk_event_scheduler.so 00:03:31.941 SYMLINK libspdk_event_vhost_blk.so 00:03:31.941 SYMLINK libspdk_event_sock.so 00:03:31.941 SYMLINK libspdk_event_iobuf.so 00:03:31.941 SYMLINK libspdk_event_vmd.so 00:03:32.199 CC module/event/subsystems/accel/accel.o 00:03:32.458 LIB libspdk_event_accel.a 00:03:32.458 SO libspdk_event_accel.so.6.0 00:03:32.458 SYMLINK libspdk_event_accel.so 00:03:32.716 CC module/event/subsystems/bdev/bdev.o 00:03:32.974 LIB libspdk_event_bdev.a 00:03:32.974 SO libspdk_event_bdev.so.6.0 00:03:33.232 SYMLINK libspdk_event_bdev.so 00:03:33.232 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.232 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.232 CC module/event/subsystems/ublk/ublk.o 00:03:33.232 CC module/event/subsystems/scsi/scsi.o 00:03:33.232 CC module/event/subsystems/nbd/nbd.o 00:03:33.491 LIB libspdk_event_ublk.a 00:03:33.491 LIB libspdk_event_nbd.a 00:03:33.491 LIB libspdk_event_scsi.a 00:03:33.491 SO libspdk_event_ublk.so.3.0 00:03:33.491 SO libspdk_event_nbd.so.6.0 00:03:33.491 SO libspdk_event_scsi.so.6.0 00:03:33.749 SYMLINK libspdk_event_ublk.so 00:03:33.749 SYMLINK libspdk_event_nbd.so 00:03:33.749 LIB libspdk_event_nvmf.a 00:03:33.749 SYMLINK libspdk_event_scsi.so 00:03:33.749 SO libspdk_event_nvmf.so.6.0 00:03:33.749 SYMLINK libspdk_event_nvmf.so 00:03:34.007 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:34.007 CC module/event/subsystems/iscsi/iscsi.o 00:03:34.007 LIB libspdk_event_vhost_scsi.a 00:03:34.007 LIB libspdk_event_iscsi.a 00:03:34.007 SO libspdk_event_vhost_scsi.so.3.0 00:03:34.266 SO libspdk_event_iscsi.so.6.0 00:03:34.266 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.266 SYMLINK libspdk_event_iscsi.so 00:03:34.266 SO libspdk.so.6.0 00:03:34.266 SYMLINK libspdk.so 00:03:34.524 CXX app/trace/trace.o 00:03:34.524 CC app/spdk_lspci/spdk_lspci.o 00:03:34.524 CC app/trace_record/trace_record.o 00:03:34.524 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.781 CC app/nvmf_tgt/nvmf_main.o 00:03:34.781 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.781 CC examples/ioat/perf/perf.o 00:03:34.781 CC 
app/spdk_tgt/spdk_tgt.o 00:03:34.781 CC examples/util/zipf/zipf.o 00:03:34.781 CC test/thread/poller_perf/poller_perf.o 00:03:34.781 LINK spdk_lspci 00:03:34.781 LINK nvmf_tgt 00:03:34.781 LINK interrupt_tgt 00:03:35.038 LINK zipf 00:03:35.038 LINK spdk_trace_record 00:03:35.038 LINK iscsi_tgt 00:03:35.038 LINK ioat_perf 00:03:35.038 LINK poller_perf 00:03:35.038 LINK spdk_tgt 00:03:35.038 LINK spdk_trace 00:03:35.297 CC app/spdk_nvme_perf/perf.o 00:03:35.297 CC examples/ioat/verify/verify.o 00:03:35.297 TEST_HEADER include/spdk/accel.h 00:03:35.297 TEST_HEADER include/spdk/accel_module.h 00:03:35.297 TEST_HEADER include/spdk/assert.h 00:03:35.297 TEST_HEADER include/spdk/barrier.h 00:03:35.297 TEST_HEADER include/spdk/base64.h 00:03:35.297 TEST_HEADER include/spdk/bdev.h 00:03:35.297 TEST_HEADER include/spdk/bdev_module.h 00:03:35.297 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.297 TEST_HEADER include/spdk/bit_array.h 00:03:35.297 TEST_HEADER include/spdk/bit_pool.h 00:03:35.297 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.297 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.297 TEST_HEADER include/spdk/blobfs.h 00:03:35.297 TEST_HEADER include/spdk/blob.h 00:03:35.297 TEST_HEADER include/spdk/conf.h 00:03:35.297 TEST_HEADER include/spdk/config.h 00:03:35.297 TEST_HEADER include/spdk/cpuset.h 00:03:35.297 CC examples/sock/hello_world/hello_sock.o 00:03:35.297 TEST_HEADER include/spdk/crc16.h 00:03:35.297 TEST_HEADER include/spdk/crc32.h 00:03:35.297 TEST_HEADER include/spdk/crc64.h 00:03:35.297 TEST_HEADER include/spdk/dif.h 00:03:35.298 CC test/dma/test_dma/test_dma.o 00:03:35.298 TEST_HEADER include/spdk/dma.h 00:03:35.298 TEST_HEADER include/spdk/endian.h 00:03:35.298 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.298 TEST_HEADER include/spdk/env.h 00:03:35.298 TEST_HEADER include/spdk/event.h 00:03:35.298 TEST_HEADER include/spdk/fd_group.h 00:03:35.298 TEST_HEADER include/spdk/fd.h 00:03:35.298 TEST_HEADER include/spdk/file.h 00:03:35.298 CC app/spdk_nvme_identify/identify.o 00:03:35.298 TEST_HEADER include/spdk/ftl.h 00:03:35.298 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.298 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.298 TEST_HEADER include/spdk/hexlify.h 00:03:35.298 TEST_HEADER include/spdk/histogram_data.h 00:03:35.298 TEST_HEADER include/spdk/idxd.h 00:03:35.298 CC examples/thread/thread/thread_ex.o 00:03:35.298 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.298 TEST_HEADER include/spdk/init.h 00:03:35.298 TEST_HEADER include/spdk/ioat.h 00:03:35.298 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.298 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.298 TEST_HEADER include/spdk/json.h 00:03:35.298 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.298 TEST_HEADER include/spdk/keyring.h 00:03:35.298 TEST_HEADER include/spdk/keyring_module.h 00:03:35.298 TEST_HEADER include/spdk/likely.h 00:03:35.298 TEST_HEADER include/spdk/log.h 00:03:35.298 TEST_HEADER include/spdk/lvol.h 00:03:35.298 TEST_HEADER include/spdk/memory.h 00:03:35.298 TEST_HEADER include/spdk/mmio.h 00:03:35.298 TEST_HEADER include/spdk/nbd.h 00:03:35.556 TEST_HEADER include/spdk/notify.h 00:03:35.556 TEST_HEADER include/spdk/nvme.h 00:03:35.556 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.556 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.556 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.556 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.556 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.556 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.556 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.556 CC 
test/app/bdev_svc/bdev_svc.o 00:03:35.556 TEST_HEADER include/spdk/nvmf.h 00:03:35.556 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.556 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.556 TEST_HEADER include/spdk/opal.h 00:03:35.556 TEST_HEADER include/spdk/opal_spec.h 00:03:35.556 TEST_HEADER include/spdk/pci_ids.h 00:03:35.556 TEST_HEADER include/spdk/pipe.h 00:03:35.556 TEST_HEADER include/spdk/queue.h 00:03:35.556 CC app/spdk_top/spdk_top.o 00:03:35.556 TEST_HEADER include/spdk/reduce.h 00:03:35.556 TEST_HEADER include/spdk/rpc.h 00:03:35.556 TEST_HEADER include/spdk/scheduler.h 00:03:35.556 TEST_HEADER include/spdk/scsi.h 00:03:35.556 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.556 TEST_HEADER include/spdk/sock.h 00:03:35.556 TEST_HEADER include/spdk/stdinc.h 00:03:35.556 TEST_HEADER include/spdk/string.h 00:03:35.556 TEST_HEADER include/spdk/thread.h 00:03:35.556 TEST_HEADER include/spdk/trace.h 00:03:35.556 TEST_HEADER include/spdk/trace_parser.h 00:03:35.556 TEST_HEADER include/spdk/tree.h 00:03:35.556 TEST_HEADER include/spdk/ublk.h 00:03:35.556 TEST_HEADER include/spdk/util.h 00:03:35.556 TEST_HEADER include/spdk/uuid.h 00:03:35.556 TEST_HEADER include/spdk/version.h 00:03:35.556 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.556 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.556 TEST_HEADER include/spdk/vhost.h 00:03:35.556 TEST_HEADER include/spdk/vmd.h 00:03:35.556 TEST_HEADER include/spdk/xor.h 00:03:35.556 TEST_HEADER include/spdk/zipf.h 00:03:35.556 LINK verify 00:03:35.556 CXX test/cpp_headers/accel.o 00:03:35.556 LINK spdk_nvme_discover 00:03:35.556 LINK hello_sock 00:03:35.556 LINK bdev_svc 00:03:35.556 LINK thread 00:03:35.814 CXX test/cpp_headers/accel_module.o 00:03:35.814 LINK test_dma 00:03:35.814 CXX test/cpp_headers/assert.o 00:03:35.814 CC app/vhost/vhost.o 00:03:36.146 CC test/event/event_perf/event_perf.o 00:03:36.146 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.146 CC examples/vmd/lsvmd/lsvmd.o 00:03:36.146 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:36.146 CXX test/cpp_headers/barrier.o 00:03:36.146 CC examples/vmd/led/led.o 00:03:36.146 LINK event_perf 00:03:36.146 LINK vhost 00:03:36.146 LINK spdk_nvme_perf 00:03:36.146 LINK spdk_nvme_identify 00:03:36.146 LINK lsvmd 00:03:36.407 CXX test/cpp_headers/base64.o 00:03:36.407 LINK led 00:03:36.407 LINK spdk_top 00:03:36.407 CC test/event/reactor/reactor.o 00:03:36.408 CC test/event/reactor_perf/reactor_perf.o 00:03:36.408 CXX test/cpp_headers/bdev.o 00:03:36.408 LINK nvme_fuzz 00:03:36.408 CC test/rpc_client/rpc_client_test.o 00:03:36.408 CC test/event/app_repeat/app_repeat.o 00:03:36.408 CC test/event/scheduler/scheduler.o 00:03:36.666 LINK reactor 00:03:36.666 LINK reactor_perf 00:03:36.666 LINK mem_callbacks 00:03:36.666 CC app/spdk_dd/spdk_dd.o 00:03:36.666 CXX test/cpp_headers/bdev_module.o 00:03:36.666 LINK app_repeat 00:03:36.666 CC examples/idxd/perf/perf.o 00:03:36.666 LINK rpc_client_test 00:03:36.666 LINK scheduler 00:03:36.666 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.924 CC test/env/vtophys/vtophys.o 00:03:36.924 CC test/app/histogram_perf/histogram_perf.o 00:03:36.924 CXX test/cpp_headers/bdev_zone.o 00:03:36.924 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.924 CC test/app/jsoncat/jsoncat.o 00:03:36.924 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.924 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.924 LINK histogram_perf 00:03:37.183 LINK vtophys 00:03:37.183 LINK idxd_perf 00:03:37.183 LINK env_dpdk_post_init 00:03:37.183 CXX 
test/cpp_headers/bit_array.o 00:03:37.183 LINK jsoncat 00:03:37.183 LINK spdk_dd 00:03:37.441 CXX test/cpp_headers/bit_pool.o 00:03:37.441 CC test/app/stub/stub.o 00:03:37.441 CC test/env/memory/memory_ut.o 00:03:37.441 CC examples/accel/perf/accel_perf.o 00:03:37.441 CC test/env/pci/pci_ut.o 00:03:37.441 LINK vhost_fuzz 00:03:37.441 CXX test/cpp_headers/blob_bdev.o 00:03:37.441 CC test/accel/dif/dif.o 00:03:37.441 LINK stub 00:03:37.441 CC test/blobfs/mkfs/mkfs.o 00:03:37.699 CC app/fio/nvme/fio_plugin.o 00:03:37.699 CXX test/cpp_headers/blobfs_bdev.o 00:03:37.699 CXX test/cpp_headers/blobfs.o 00:03:37.957 CC test/lvol/esnap/esnap.o 00:03:37.957 LINK mkfs 00:03:37.957 LINK pci_ut 00:03:37.957 LINK accel_perf 00:03:37.957 CXX test/cpp_headers/blob.o 00:03:37.957 LINK dif 00:03:38.213 CXX test/cpp_headers/conf.o 00:03:38.213 LINK spdk_nvme 00:03:38.470 CC test/nvme/aer/aer.o 00:03:38.470 CXX test/cpp_headers/config.o 00:03:38.470 CC examples/nvme/hello_world/hello_world.o 00:03:38.470 CC test/nvme/reset/reset.o 00:03:38.470 CC examples/blob/hello_world/hello_blob.o 00:03:38.470 CXX test/cpp_headers/cpuset.o 00:03:38.470 CC app/fio/bdev/fio_plugin.o 00:03:38.727 LINK iscsi_fuzz 00:03:38.727 CC examples/blob/cli/blobcli.o 00:03:38.727 CXX test/cpp_headers/crc16.o 00:03:38.727 LINK memory_ut 00:03:38.727 LINK hello_world 00:03:38.727 LINK aer 00:03:38.727 LINK reset 00:03:38.727 LINK hello_blob 00:03:38.984 CXX test/cpp_headers/crc32.o 00:03:38.984 CXX test/cpp_headers/crc64.o 00:03:38.984 CC examples/nvme/reconnect/reconnect.o 00:03:38.984 CXX test/cpp_headers/dif.o 00:03:38.984 LINK spdk_bdev 00:03:39.242 CC test/nvme/sgl/sgl.o 00:03:39.242 CC test/nvme/e2edp/nvme_dp.o 00:03:39.242 CC test/nvme/overhead/overhead.o 00:03:39.242 LINK blobcli 00:03:39.242 CC test/bdev/bdevio/bdevio.o 00:03:39.242 CC examples/bdev/hello_world/hello_bdev.o 00:03:39.242 CXX test/cpp_headers/dma.o 00:03:39.499 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.499 LINK reconnect 00:03:39.499 LINK sgl 00:03:39.499 CXX test/cpp_headers/endian.o 00:03:39.499 LINK nvme_dp 00:03:39.499 LINK overhead 00:03:39.499 LINK hello_bdev 00:03:39.499 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.499 LINK bdevio 00:03:39.499 CXX test/cpp_headers/env_dpdk.o 00:03:39.756 CXX test/cpp_headers/env.o 00:03:39.756 CXX test/cpp_headers/event.o 00:03:39.756 CC test/nvme/err_injection/err_injection.o 00:03:39.756 CXX test/cpp_headers/fd_group.o 00:03:39.756 CC test/nvme/startup/startup.o 00:03:39.756 CXX test/cpp_headers/fd.o 00:03:40.013 CC examples/nvme/arbitration/arbitration.o 00:03:40.013 LINK startup 00:03:40.013 LINK err_injection 00:03:40.013 CC examples/nvme/hotplug/hotplug.o 00:03:40.013 CC test/nvme/reserve/reserve.o 00:03:40.013 CXX test/cpp_headers/file.o 00:03:40.013 CC test/nvme/simple_copy/simple_copy.o 00:03:40.013 LINK nvme_manage 00:03:40.272 LINK bdevperf 00:03:40.272 CXX test/cpp_headers/ftl.o 00:03:40.272 CC test/nvme/connect_stress/connect_stress.o 00:03:40.272 LINK reserve 00:03:40.272 LINK hotplug 00:03:40.272 CC test/nvme/boot_partition/boot_partition.o 00:03:40.272 LINK simple_copy 00:03:40.272 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.272 LINK arbitration 00:03:40.272 CXX test/cpp_headers/gpt_spec.o 00:03:40.272 LINK connect_stress 00:03:40.529 CXX test/cpp_headers/hexlify.o 00:03:40.529 LINK boot_partition 00:03:40.529 CC test/nvme/compliance/nvme_compliance.o 00:03:40.529 CC test/nvme/fused_ordering/fused_ordering.o 00:03:40.529 LINK cmb_copy 00:03:40.529 CC test/nvme/doorbell_aers/doorbell_aers.o 
00:03:40.529 CXX test/cpp_headers/histogram_data.o 00:03:40.529 CXX test/cpp_headers/idxd.o 00:03:40.529 CXX test/cpp_headers/idxd_spec.o 00:03:40.529 CC test/nvme/fdp/fdp.o 00:03:40.787 CC test/nvme/cuse/cuse.o 00:03:40.787 LINK fused_ordering 00:03:40.787 LINK doorbell_aers 00:03:40.787 CXX test/cpp_headers/init.o 00:03:40.787 CXX test/cpp_headers/ioat.o 00:03:40.787 CC examples/nvme/abort/abort.o 00:03:40.787 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.787 LINK nvme_compliance 00:03:40.787 CXX test/cpp_headers/ioat_spec.o 00:03:41.045 CXX test/cpp_headers/iscsi_spec.o 00:03:41.045 LINK fdp 00:03:41.045 CXX test/cpp_headers/json.o 00:03:41.045 CXX test/cpp_headers/jsonrpc.o 00:03:41.045 LINK pmr_persistence 00:03:41.045 CXX test/cpp_headers/keyring.o 00:03:41.045 CXX test/cpp_headers/keyring_module.o 00:03:41.045 CXX test/cpp_headers/likely.o 00:03:41.045 CXX test/cpp_headers/log.o 00:03:41.045 CXX test/cpp_headers/lvol.o 00:03:41.045 CXX test/cpp_headers/memory.o 00:03:41.303 LINK abort 00:03:41.303 CXX test/cpp_headers/mmio.o 00:03:41.303 CXX test/cpp_headers/nbd.o 00:03:41.303 CXX test/cpp_headers/notify.o 00:03:41.303 CXX test/cpp_headers/nvme.o 00:03:41.303 CXX test/cpp_headers/nvme_intel.o 00:03:41.303 CXX test/cpp_headers/nvme_ocssd.o 00:03:41.303 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.303 CXX test/cpp_headers/nvme_spec.o 00:03:41.303 CXX test/cpp_headers/nvme_zns.o 00:03:41.303 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.561 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.561 CXX test/cpp_headers/nvmf.o 00:03:41.561 CXX test/cpp_headers/nvmf_spec.o 00:03:41.561 CXX test/cpp_headers/nvmf_transport.o 00:03:41.561 CXX test/cpp_headers/opal.o 00:03:41.561 CXX test/cpp_headers/opal_spec.o 00:03:41.561 CC examples/nvmf/nvmf/nvmf.o 00:03:41.561 CXX test/cpp_headers/pci_ids.o 00:03:41.561 CXX test/cpp_headers/pipe.o 00:03:41.561 CXX test/cpp_headers/queue.o 00:03:41.819 CXX test/cpp_headers/reduce.o 00:03:41.819 CXX test/cpp_headers/scheduler.o 00:03:41.819 CXX test/cpp_headers/rpc.o 00:03:41.819 CXX test/cpp_headers/scsi.o 00:03:41.819 CXX test/cpp_headers/scsi_spec.o 00:03:41.819 CXX test/cpp_headers/sock.o 00:03:41.819 CXX test/cpp_headers/stdinc.o 00:03:41.819 LINK nvmf 00:03:41.819 CXX test/cpp_headers/string.o 00:03:41.819 CXX test/cpp_headers/thread.o 00:03:41.819 CXX test/cpp_headers/trace.o 00:03:41.819 CXX test/cpp_headers/trace_parser.o 00:03:42.078 CXX test/cpp_headers/tree.o 00:03:42.078 CXX test/cpp_headers/ublk.o 00:03:42.078 CXX test/cpp_headers/util.o 00:03:42.078 LINK cuse 00:03:42.078 CXX test/cpp_headers/uuid.o 00:03:42.078 CXX test/cpp_headers/version.o 00:03:42.078 CXX test/cpp_headers/vfio_user_pci.o 00:03:42.078 CXX test/cpp_headers/vfio_user_spec.o 00:03:42.078 CXX test/cpp_headers/vhost.o 00:03:42.078 CXX test/cpp_headers/vmd.o 00:03:42.078 CXX test/cpp_headers/xor.o 00:03:42.078 CXX test/cpp_headers/zipf.o 00:03:43.470 LINK esnap 00:03:43.729 00:03:43.729 real 1m6.655s 00:03:43.729 user 6m42.486s 00:03:43.729 sys 1m43.056s 00:03:43.729 12:28:16 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:43.729 12:28:16 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.729 ************************************ 00:03:43.729 END TEST make 00:03:43.729 ************************************ 00:03:43.729 12:28:16 -- common/autotest_common.sh@1142 -- $ return 0 00:03:43.729 12:28:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.729 12:28:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.729 12:28:16 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.729 12:28:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.729 12:28:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.729 12:28:16 -- pm/common@44 -- $ pid=5150 00:03:43.729 12:28:16 -- pm/common@50 -- $ kill -TERM 5150 00:03:43.729 12:28:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.729 12:28:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.729 12:28:16 -- pm/common@44 -- $ pid=5152 00:03:43.729 12:28:16 -- pm/common@50 -- $ kill -TERM 5152 00:03:43.729 12:28:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.729 12:28:16 -- nvmf/common.sh@7 -- # uname -s 00:03:43.729 12:28:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.729 12:28:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.729 12:28:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.729 12:28:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.729 12:28:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.729 12:28:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.729 12:28:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.729 12:28:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.729 12:28:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.729 12:28:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.729 12:28:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:03:43.729 12:28:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:03:43.729 12:28:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.729 12:28:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.729 12:28:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:43.729 12:28:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.729 12:28:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:43.729 12:28:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.729 12:28:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.729 12:28:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.729 12:28:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.729 12:28:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.729 12:28:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.729 12:28:16 -- paths/export.sh@5 -- # export PATH 00:03:43.729 12:28:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.729 12:28:16 -- nvmf/common.sh@47 -- # : 0 00:03:43.729 12:28:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:43.729 12:28:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:43.729 12:28:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.729 12:28:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.729 12:28:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.729 12:28:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:43.729 12:28:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:43.729 12:28:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:43.729 12:28:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.729 12:28:16 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.729 12:28:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.729 12:28:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.729 12:28:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.987 12:28:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.987 12:28:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.987 12:28:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.987 12:28:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.987 12:28:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.987 12:28:16 -- spdk/autotest.sh@48 -- # udevadm_pid=52801 00:03:43.987 12:28:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.987 12:28:16 -- pm/common@17 -- # local monitor 00:03:43.987 12:28:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.987 12:28:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.987 12:28:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.987 12:28:16 -- pm/common@21 -- # date +%s 00:03:43.987 12:28:16 -- pm/common@25 -- # sleep 1 00:03:43.987 12:28:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721046496 00:03:43.987 12:28:16 -- pm/common@21 -- # date +%s 00:03:43.987 12:28:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721046496 00:03:43.987 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721046496_collect-vmstat.pm.log 00:03:43.987 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721046496_collect-cpu-load.pm.log 00:03:44.922 12:28:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.922 12:28:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.922 12:28:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.922 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:03:44.922 12:28:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.922 12:28:17 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:44.922 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:03:44.922 12:28:17 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.922 12:28:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.922 12:28:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.922 12:28:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.922 12:28:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.922 12:28:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.922 12:28:17 -- common/autotest_common.sh@1455 -- # uname 00:03:44.922 12:28:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:44.922 12:28:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.922 12:28:17 -- common/autotest_common.sh@1475 -- # uname 00:03:44.923 12:28:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:44.923 12:28:17 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:44.923 12:28:17 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:44.923 12:28:17 -- spdk/autotest.sh@72 -- # hash lcov 00:03:44.923 12:28:17 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:44.923 12:28:17 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:44.923 --rc lcov_branch_coverage=1 00:03:44.923 --rc lcov_function_coverage=1 00:03:44.923 --rc genhtml_branch_coverage=1 00:03:44.923 --rc genhtml_function_coverage=1 00:03:44.923 --rc genhtml_legend=1 00:03:44.923 --rc geninfo_all_blocks=1 00:03:44.923 ' 00:03:44.923 12:28:17 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:44.923 --rc lcov_branch_coverage=1 00:03:44.923 --rc lcov_function_coverage=1 00:03:44.923 --rc genhtml_branch_coverage=1 00:03:44.923 --rc genhtml_function_coverage=1 00:03:44.923 --rc genhtml_legend=1 00:03:44.923 --rc geninfo_all_blocks=1 00:03:44.923 ' 00:03:44.923 12:28:17 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:44.923 --rc lcov_branch_coverage=1 00:03:44.923 --rc lcov_function_coverage=1 00:03:44.923 --rc genhtml_branch_coverage=1 00:03:44.923 --rc genhtml_function_coverage=1 00:03:44.923 --rc genhtml_legend=1 00:03:44.923 --rc geninfo_all_blocks=1 00:03:44.923 --no-external' 00:03:44.923 12:28:17 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:44.923 --rc lcov_branch_coverage=1 00:03:44.923 --rc lcov_function_coverage=1 00:03:44.923 --rc genhtml_branch_coverage=1 00:03:44.923 --rc genhtml_function_coverage=1 00:03:44.923 --rc genhtml_legend=1 00:03:44.923 --rc geninfo_all_blocks=1 00:03:44.923 --no-external' 00:03:44.923 12:28:17 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:45.181 lcov: LCOV version 1.14 00:03:45.181 12:28:17 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:00.082 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:00.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:14.960 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:14.960 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:14.960 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:14.960 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:14.961 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:14.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:18.249 12:28:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:18.249 12:28:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.249 12:28:50 -- common/autotest_common.sh@10 -- # set +x 00:04:18.249 12:28:50 -- spdk/autotest.sh@91 -- # rm -f 00:04:18.249 12:28:50 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.507 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:18.507 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:18.507 12:28:51 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:18.507 12:28:51 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:18.507 12:28:51 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:18.507 12:28:51 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:18.507 12:28:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.507 12:28:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:18.507 12:28:51 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:18.507 12:28:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.507 12:28:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.507 12:28:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.507 12:28:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:18.507 12:28:51 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:18.507 12:28:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.507 12:28:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.507 12:28:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.507 12:28:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:18.507 12:28:51 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:18.507 12:28:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:18.507 12:28:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.507 12:28:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.507 12:28:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:18.507 12:28:51 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:18.507 12:28:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:18.507 12:28:51 
-- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.507 12:28:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:18.507 12:28:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.507 12:28:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.507 12:28:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:18.507 12:28:51 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:18.507 12:28:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:18.507 No valid GPT data, bailing 00:04:18.507 12:28:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.507 12:28:51 -- scripts/common.sh@391 -- # pt= 00:04:18.507 12:28:51 -- scripts/common.sh@392 -- # return 1 00:04:18.507 12:28:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:18.507 1+0 records in 00:04:18.507 1+0 records out 00:04:18.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360055 s, 291 MB/s 00:04:18.507 12:28:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.507 12:28:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.507 12:28:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:18.507 12:28:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:18.507 12:28:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:18.764 No valid GPT data, bailing 00:04:18.764 12:28:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.764 12:28:51 -- scripts/common.sh@391 -- # pt= 00:04:18.764 12:28:51 -- scripts/common.sh@392 -- # return 1 00:04:18.764 12:28:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:18.764 1+0 records in 00:04:18.764 1+0 records out 00:04:18.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371492 s, 282 MB/s 00:04:18.764 12:28:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.764 12:28:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.764 12:28:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:18.764 12:28:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:18.764 12:28:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:18.764 No valid GPT data, bailing 00:04:18.764 12:28:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:18.764 12:28:51 -- scripts/common.sh@391 -- # pt= 00:04:18.764 12:28:51 -- scripts/common.sh@392 -- # return 1 00:04:18.764 12:28:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:18.764 1+0 records in 00:04:18.764 1+0 records out 00:04:18.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351589 s, 298 MB/s 00:04:18.764 12:28:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.764 12:28:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.764 12:28:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:18.764 12:28:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:18.764 12:28:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:18.764 No valid GPT data, bailing 00:04:18.764 12:28:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:18.764 12:28:51 -- scripts/common.sh@391 -- # pt= 00:04:18.764 12:28:51 -- scripts/common.sh@392 -- # return 1 00:04:18.764 12:28:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:04:18.764 1+0 records in 00:04:18.764 1+0 records out 00:04:18.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00321892 s, 326 MB/s 00:04:18.764 12:28:51 -- spdk/autotest.sh@118 -- # sync 00:04:18.764 12:28:51 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:18.764 12:28:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:18.764 12:28:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:20.664 12:28:53 -- spdk/autotest.sh@124 -- # uname -s 00:04:20.664 12:28:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:20.664 12:28:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:20.664 12:28:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.664 12:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.664 12:28:53 -- common/autotest_common.sh@10 -- # set +x 00:04:20.664 ************************************ 00:04:20.664 START TEST setup.sh 00:04:20.664 ************************************ 00:04:20.664 12:28:53 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:20.664 * Looking for test storage... 00:04:20.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.664 12:28:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:20.664 12:28:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:20.664 12:28:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:20.664 12:28:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.664 12:28:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.664 12:28:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.664 ************************************ 00:04:20.664 START TEST acl 00:04:20.664 ************************************ 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:20.664 * Looking for test storage... 
00:04:20.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.664 12:28:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:20.664 12:28:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.664 12:28:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:20.664 12:28:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:20.664 12:28:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:20.664 12:28:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:20.664 12:28:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:20.922 12:28:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.922 12:28:53 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.489 12:28:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:21.489 12:28:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:21.489 12:28:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.489 12:28:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:21.489 12:28:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.489 12:28:54 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.056 12:28:54 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.056 Hugepages 00:04:22.056 node hugesize free / total 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.056 00:04:22.056 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.056 12:28:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:22.316 12:28:54 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:22.316 12:28:54 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.316 12:28:54 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.316 12:28:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:22.316 ************************************ 00:04:22.316 START TEST denied 00:04:22.316 ************************************ 00:04:22.316 12:28:54 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:22.316 12:28:54 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:22.316 12:28:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:22.316 12:28:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:22.316 12:28:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.316 12:28:54 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.255 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.255 12:28:55 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.824 00:04:23.824 real 0m1.473s 00:04:23.824 user 0m0.582s 00:04:23.824 sys 0m0.838s 00:04:23.824 ************************************ 00:04:23.824 END TEST denied 00:04:23.824 12:28:56 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.824 12:28:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:23.824 ************************************ 00:04:23.824 12:28:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:23.824 12:28:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:23.824 12:28:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.824 12:28:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.824 12:28:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:23.824 ************************************ 00:04:23.824 START TEST allowed 00:04:23.824 ************************************ 00:04:23.824 12:28:56 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:23.824 12:28:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:23.824 12:28:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:23.824 12:28:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:23.824 12:28:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.824 12:28:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.759 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.759 12:28:57 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.332 00:04:25.332 real 0m1.549s 00:04:25.332 user 0m0.703s 00:04:25.332 sys 0m0.849s 00:04:25.332 12:28:58 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:25.332 12:28:58 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:25.332 ************************************ 00:04:25.332 END TEST allowed 00:04:25.332 ************************************ 00:04:25.593 12:28:58 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:25.593 00:04:25.593 real 0m4.799s 00:04:25.593 user 0m2.108s 00:04:25.593 sys 0m2.638s 00:04:25.593 12:28:58 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.593 12:28:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.593 ************************************ 00:04:25.593 END TEST acl 00:04:25.593 ************************************ 00:04:25.593 12:28:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:25.593 12:28:58 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:25.593 12:28:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.593 12:28:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.593 12:28:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.593 ************************************ 00:04:25.593 START TEST hugepages 00:04:25.593 ************************************ 00:04:25.593 12:28:58 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:25.593 * Looking for test storage... 00:04:25.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 6018660 kB' 'MemAvailable: 7413508 kB' 'Buffers: 2436 kB' 'Cached: 1609320 kB' 'SwapCached: 0 kB' 'Active: 435972 kB' 'Inactive: 1280408 kB' 'Active(anon): 115112 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280408 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106244 kB' 'Mapped: 48516 kB' 'Shmem: 10488 kB' 'KReclaimable: 61488 kB' 'Slab: 133044 kB' 'SReclaimable: 61488 kB' 'SUnreclaim: 71556 kB' 'KernelStack: 6348 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412428 kB' 'Committed_AS: 336528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 12:28:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.595 12:28:58 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.595 12:28:58 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:25.595 12:28:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.595 12:28:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.595 12:28:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.595 ************************************ 00:04:25.595 START TEST default_setup 00:04:25.595 ************************************ 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.595 12:28:58 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.533 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.533 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8099424 kB' 'MemAvailable: 9494140 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452716 kB' 'Inactive: 1280412 kB' 'Active(anon): 131856 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123000 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132716 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71504 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.533 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
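
The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time until it reaches the field it was asked for (Hugepagesize earlier, AnonHugePages in this pass), echoing the value and returning. A minimal standalone sketch of that parsing pattern, assuming plain /proc/meminfo input; meminfo_lookup is an illustrative name, not the SPDK helper itself:

meminfo_lookup() {
    # Walk /proc/meminfo key by key and print the value of the requested key.
    # (The helper in the trace also handles /sys/devices/system/node/node*/meminfo,
    # whose lines carry a "Node N " prefix; that part is omitted here.)
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

meminfo_lookup Hugepagesize     # 2048 on this runner (value is in kB)
meminfo_lookup HugePages_Total  # 1024 once default_setup has reserved its pages
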
00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.534 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8099424 kB' 'MemAvailable: 9494140 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452372 kB' 'Inactive: 1280412 kB' 'Active(anon): 131512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122648 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132668 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71456 kB' 'KernelStack: 6288 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
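
This stretch of the trace is verify_nr_hugepages collecting AnonHugePages, HugePages_Surp and HugePages_Rsvd from the same meminfo snapshot so it can compare the kernel's counters with the 1024 x 2048 kB pages that default_setup recorded in nodes_test earlier in the trace. A sketch of that kind of check under those assumptions; the one-line awk helper and the exact comparison are illustrative, not the SPDK implementation:

meminfo() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

expected=1024
total=$(meminfo HugePages_Total)
free=$(meminfo HugePages_Free)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

printf 'HugePages total=%s free=%s surplus=%s reserved=%s\n' \
    "$total" "$free" "$surp" "$resv"

if (( total != expected )); then
    echo "expected $expected hugepages, kernel reports $total" >&2
    exit 1
fi
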
00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.535 12:28:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.535 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8099872 kB' 'MemAvailable: 9494588 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452612 kB' 'Inactive: 1280412 kB' 'Active(anon): 131752 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122892 kB' 'Mapped: 
48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132656 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71444 kB' 'KernelStack: 6288 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.536 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.537 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 
12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:26.538 nr_hugepages=1024 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.538 resv_hugepages=0 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.538 surplus_hugepages=0 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.538 anon_hugepages=0 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8100388 kB' 'MemAvailable: 9495104 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452324 kB' 'Inactive: 1280412 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122604 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132648 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71436 kB' 'KernelStack: 6272 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.538 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.539 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 
12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8100332 kB' 'MemUsed: 4141624 kB' 'SwapCached: 0 kB' 'Active: 452356 kB' 'Inactive: 1280412 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1611744 kB' 'Mapped: 48520 kB' 'AnonPages: 122636 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132640 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.831 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
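The xtrace above (ending in "echo 0" / "return 0") is one full pass of setup/common.sh's get_meminfo helper over /proc/meminfo: each line is split into field name and value (IFS=': '), every field that is not the requested one is skipped (hence the long run of "continue" entries), and the value is echoed once the HugePages_Surp line is reached (0 on this runner). The sketch below is a hypothetical, simplified reconstruction of that pattern, not the verbatim setup/common.sh source; the function name and the sed-based prefix stripping are illustrative.

    # Hypothetical, simplified reconstruction of the get_meminfo pattern the
    # xtrace above walks through; not the verbatim setup/common.sh source.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local var val _

        # The trace also probes /sys/devices/system/node/node<N>/meminfo, which
        # holds the same counters restricted to a single NUMA node.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # Per-node lines carry a "Node <N> " prefix; strip it so the field names
        # match the /proc/meminfo spelling, then split on ':'/' ' and skip every
        # field (the repeated "continue" entries above) until the requested one.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. get_meminfo_sketch HugePages_Surp -> 0 here
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")

        return 1
    }

The same loop is repeated further down in this log for AnonHugePages, HugePages_Surp, HugePages_Rsvd and the per-node totals.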
00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.832 node0=1024 expecting 1024 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.832 00:04:26.832 real 0m1.012s 00:04:26.832 user 0m0.484s 00:04:26.832 sys 0m0.475s 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.832 12:28:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:26.832 ************************************ 00:04:26.832 END TEST default_setup 00:04:26.832 ************************************ 00:04:26.832 12:28:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:26.832 12:28:59 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:26.832 12:28:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.832 12:28:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.832 12:28:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.832 ************************************ 00:04:26.832 START TEST per_node_1G_alloc 00:04:26.832 ************************************ 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.832 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.832 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.092 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.093 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc 
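The trace between the END/START banners and this point shows get_test_nr_hugepages 1048576 0 turning the 1 GiB request into 512 hugepages pinned to node 0 (nr_hugepages=512, nodes_test[0]=512), after which scripts/setup.sh is invoked with NRHUGE=512 HUGENODE=0. A hedged sketch of that arithmetic, using the 2048 kB Hugepagesize reported in the meminfo dumps in this log; the variable names are illustrative, not the exact setup/hugepages.sh helpers:

    # Illustrative sketch of the size -> page-count conversion traced above.
    size_kb=1048576                                                      # 1 GiB requested for node 0
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this runner
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512
    echo "node0 gets $nr_hugepages hugepages"                            # 512
    # The test then applies it via the repo's setup script (not run here,
    # since it also rebinds PCI devices as seen in the log):
    #   NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh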
-- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9144660 kB' 'MemAvailable: 10539384 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452860 kB' 'Inactive: 1280420 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280420 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122848 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132624 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71412 kB' 'KernelStack: 6260 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 
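The single long printf entry above is the raw snapshot that the loop then walks field by field; everything the hugepage tests care about is already visible in it (HugePages_Total/Free: 512, HugePages_Rsvd/Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 1048576 kB). For a quick manual check outside the test harness, the same counters can be pulled with a one-liner such as:

    # e.g. extract only the hugepage counters from the snapshot being parsed
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo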
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.093 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9144948 kB' 'MemAvailable: 10539672 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452472 kB' 'Inactive: 1280420 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280420 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122752 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132644 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71432 kB' 'KernelStack: 6304 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.094 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.095 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9144948 kB' 'MemAvailable: 10539672 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452476 kB' 'Inactive: 1280420 kB' 'Active(anon): 131616 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280420 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122760 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132640 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71428 kB' 'KernelStack: 6304 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.358 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.358 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.358 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.358 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
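The long run of "continue" entries above is the generic meminfo lookup in setup/common.sh scanning every field of the file until it reaches the one it was asked for (HugePages_Rsvd here). A minimal sketch of that pattern, assuming only what the trace itself shows (mapfile into mem, the "Node N " prefix strip, IFS=': ' word splitting, and the echo/return at the match) rather than the shipped helper verbatim:

#!/usr/bin/env bash
shopt -s extglob                                  # needed for the +([0-9]) pattern below

get_meminfo() {
        # get_meminfo <field>: print the value of one /proc/meminfo field.
        local get=$1 mem_f=/proc/meminfo line var val _
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; stripping it lets the
        # same loop handle both /proc and sysfs sources, as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue   # the repeated continues traced above
                echo "$val"
                return 0
        done
        return 1
}

get_meminfo HugePages_Rsvd    # prints 0 on the box in this log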
00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 
12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.359 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.360 nr_hugepages=512 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:27.360 resv_hugepages=0 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.360 surplus_hugepages=0 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.360 anon_hugepages=0 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9144948 kB' 'MemAvailable: 10539672 kB' 'Buffers: 2436 kB' 'Cached: 1609308 kB' 'SwapCached: 0 kB' 'Active: 452312 kB' 'Inactive: 1280420 kB' 'Active(anon): 131452 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280420 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122900 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132640 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71428 kB' 'KernelStack: 6320 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
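Once the surplus, reserved and total counts have been read back this way, the test does nothing more exotic than integer arithmetic on them; the values echoed a few entries above (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the comparison sketched below. The literal 512 is the value printed in the trace; how hugepages.sh derives it upstream is not visible in this excerpt, so treat this as an illustration of the accounting, not the script itself.

# Values reported in this run's trace (normally read via the meminfo lookup above).
nr_hugepages=512
surp=0    # get_meminfo HugePages_Surp
resv=0    # get_meminfo HugePages_Rsvd

# hugepages.sh then insists the kernel's view matches the request exactly:
if (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )); then
        echo "hugepage accounting consistent"
else
        echo "unexpected hugepage accounting" >&2
fi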
00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
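The same lookup is repeated further down in this trace for node 0 alone (the get_meminfo HugePages_Surp 0 call that switches mem_f to /sys/devices/system/node/node0/meminfo), which is how the per-node test arrives at the "node0=512 expecting 512" line near the end. A sketch of that per-node source selection and node bookkeeping, again mirroring only the logged commands rather than reproducing setup/hugepages.sh:

#!/usr/bin/env bash
shopt -s extglob nullglob                         # for the node+([0-9]) glob

node=0
mem_f=/proc/meminfo
# When a node index is supplied and the node exposes its own meminfo,
# read from sysfs instead of the system-wide file.
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
fi
echo "reading $mem_f"

# The single-node VM in this log (no_nodes=1) gets all 512 pages assigned to
# node 0, which is what the final "node0=512 expecting 512" check verifies.
declare -a nodes_sys=()
for n in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${n##*node}]=512
done
echo "node0=${nodes_sys[0]:-unset} expecting 512"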
00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.360 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 
12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.361 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9145304 kB' 'MemUsed: 3096652 kB' 'SwapCached: 0 kB' 'Active: 452160 kB' 'Inactive: 1280424 kB' 'Active(anon): 131300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1611748 kB' 'Mapped: 48580 kB' 'AnonPages: 122448 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61212 kB' 'Slab: 132620 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.362 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.363 node0=512 expecting 512 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:27.363 00:04:27.363 real 0m0.551s 00:04:27.363 user 0m0.268s 00:04:27.363 sys 0m0.316s 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.363 12:28:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.363 ************************************ 00:04:27.363 END TEST per_node_1G_alloc 00:04:27.363 ************************************ 00:04:27.363 12:28:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.363 12:28:59 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:27.363 12:28:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.363 12:28:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.363 12:28:59 
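Editor's note: the per_node_1G_alloc case above ends by confirming that node 0 carries the 512 pages it was asked for ("node0=512 expecting 512"; the sorted_t/sorted_s arrays just collect the distinct per-node counts), and the trailing "[[ 512 == \5\1\2 ]]" is only xtrace's escaping of the expected literal. Outside the harness, the same per-node figure can be read straight from sysfs; the snippet below is a stand-alone illustration, with the node number and page size chosen to match this run rather than taken from any SPDK helper.

    # Stand-alone sketch (not an SPDK helper): report how many 2048 kB hugepages
    # NUMA node 0 currently owns. Node number and page size match this run and
    # are otherwise assumptions.
    node=0 size_kb=2048
    counter=/sys/devices/system/node/node${node}/hugepages/hugepages-${size_kb}kB/nr_hugepages
    if [[ -r $counter ]]; then
      echo "node${node}=$(<"$counter")"
    else
      echo "node${node}: no ${size_kb} kB hugepage counter in sysfs" >&2
    fi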
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.363 ************************************ 00:04:27.363 START TEST even_2G_alloc 00:04:27.363 ************************************ 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.363 12:28:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.622 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.622 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.622 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc 
-- setup/hugepages.sh@92 -- # local surp 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.886 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8100780 kB' 'MemAvailable: 9495508 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452628 kB' 'Inactive: 1280424 kB' 'Active(anon): 131768 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132644 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71432 kB' 'KernelStack: 6244 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 
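Editor's note: earlier in this test's setup, get_test_nr_hugepages is handed 2097152 (kB) and arrives at nr_hugepages=1024, which the per-node helper places entirely on node 0 before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are set and scripts/setup.sh is re-run. The arithmetic is simply the requested size divided by the system hugepage size; the snippet below redoes it from /proc/meminfo, with variable names that are illustrative rather than taken from setup/hugepages.sh.

    # Hedged sketch of the sizing step traced above: convert a requested
    # allocation in kB into a hugepage count using the default hugepage size.
    requested_kb=2097152                                   # the even_2G_alloc request (2 GiB)
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$(( requested_kb / hugepage_kb ))"  # 2097152 / 2048 = 1024

The meminfo snapshots in this verify pass agree with that figure: 'HugePages_Total: 1024', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2097152 kB' describe exactly the 2 GiB that was requested.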
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.887 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8100780 kB' 'MemAvailable: 9495508 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 1280424 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 
1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122856 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132652 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71440 kB' 'KernelStack: 6272 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 
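Editor's note: before any of these counter reads, verify_nr_hugepages gates on transparent hugepages. The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test near the top of this pass asks whether the kernel's THP mode string has "[never]" selected; here the string is "always [madvise] never", so the AnonHugePages scan that ran just before this one went ahead (and came back 0). A stand-alone version of that gate, reading the standard sysfs knob, might look like the sketch below; the messages are illustrative.

    # Hedged sketch of the THP gate seen at the start of verify_nr_hugepages:
    # the selected mode is the bracketed word in this sysfs file.
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp && $(<"$thp") == *"[never]"* ]]; then
      echo "transparent hugepages are disabled ([never] selected)"
    else
      echo "transparent hugepages active or madvise-only"
    fi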
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.888 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.889 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8101040 kB' 'MemAvailable: 9495768 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452424 kB' 'Inactive: 1280424 kB' 'Active(anon): 131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122676 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132652 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71440 kB' 'KernelStack: 6256 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.890 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.891 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.892 nr_hugepages=1024 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:27.892 resv_hugepages=0 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.892 surplus_hugepages=0 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.892 anon_hugepages=0 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8101732 kB' 'MemAvailable: 9496460 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452260 kB' 'Inactive: 1280424 kB' 'Active(anon): 131400 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122760 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132640 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71428 kB' 'KernelStack: 6292 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.892 12:29:00 
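
The xtrace above is setup/common.sh's get_meminfo at work: it snapshots /proc/meminfo (or, when a node is given, that node's meminfo file), strips the "Node N" prefix from each line, then reads the snapshot field by field, hitting continue on every key until the requested one matches, echoes that value and returns — first for HugePages_Rsvd (0 here, so resv=0), and in the scan that follows for HugePages_Total. That is why the log repeats one [[ key == ... ]] comparison per meminfo field. A minimal standalone sketch of the same lookup; the function name and the sed-based prefix strip are illustrative, not the actual setup/common.sh code:

    # Sketch only: return one field from /proc/meminfo, or from a NUMA node's
    # meminfo when a node number is given.
    get_meminfo_sketch() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix each line with "Node <N> "; strip it so both
        # layouts parse the same way, then scan until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Rsvd     -> 0 on this runner
    #      get_meminfo_sketch HugePages_Surp 0   -> 0 (node 0)
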
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.892 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.893 12:29:00 
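
At this point get_nodes has globbed /sys/devices/system/node/node*, found a single node0 carrying the full 1024 pages (nodes_sys[0]=1024, no_nodes=1), and the loop that follows reads each node's HugePages_Surp out of /sys/devices/system/node/node0/meminfo via get_meminfo. The same per-node counters are also exposed as dedicated sysfs files; the sketch below collects them that way as an alternative to parsing the node meminfo, assuming 2048 kB hugepages as this run reports (variable names are illustrative):

    # Sketch only: per-node hugepage counters straight from sysfs.
    declare -A node_pages node_surplus
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        hp=$node_dir/hugepages/hugepages-2048kB
        node_pages[$node]=$(<"$hp/nr_hugepages")
        node_surplus[$node]=$(<"$hp/surplus_hugepages")
    done
    for node in "${!node_pages[@]}"; do
        # On this runner even_2G_alloc expects node0: 1024 pages, surplus 0.
        echo "node$node: ${node_pages[$node]} pages, surplus ${node_surplus[$node]}"
    done
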
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.893 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8101732 kB' 'MemUsed: 4140224 kB' 'SwapCached: 0 kB' 'Active: 452212 kB' 'Inactive: 1280424 kB' 'Active(anon): 131352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1611748 kB' 'Mapped: 48520 kB' 'AnonPages: 122480 kB' 'Shmem: 10464 kB' 'KernelStack: 6292 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132636 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.894 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.895 node0=1024 expecting 1024 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:27.895 00:04:27.895 real 0m0.535s 00:04:27.895 user 0m0.262s 00:04:27.895 sys 0m0.310s 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.895 12:29:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.895 ************************************ 00:04:27.895 END TEST even_2G_alloc 00:04:27.895 ************************************ 00:04:27.895 12:29:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.895 12:29:00 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:27.895 12:29:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.895 12:29:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.895 12:29:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.895 ************************************ 00:04:27.895 START TEST odd_alloc 00:04:27.895 ************************************ 00:04:27.895 12:29:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:27.895 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:27.895 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:27.895 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.895 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.895 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:27.895 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
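
even_2G_alloc has just passed (node0=1024 expecting 1024), and odd_alloc now requests an odd page count before re-running setup: get_test_nr_hugepages 2098176 turns HUGEMEM=2049 (2049 x 1024 = 2098176 kB) into nr_hugepages=1025, since 2098176 kB / 2048 kB per page is 1024.5 and the helper settles on 1025 — the meminfo snapshot after setup shows HugePages_Total: 1025 and Hugetlb: 2099200 kB = 1025 x 2048 kB. The exact rounding inside get_test_nr_hugepages is not visible in this trace; the round-up below is an assumption that reproduces the traced result:

    # Sketch of the sizing arithmetic; the ceil is assumed, not quoted from hugepages.sh.
    size_kb=2098176                                                  # 2049 MB, as passed by odd_alloc
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))    # 1024.5 rounded up -> 1025
    echo "HUGEMEM=$(( size_kb / 1024 )) -> nr_hugepages=$nr_hugepages"
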
00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.896 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.420 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.420 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8103148 kB' 'MemAvailable: 9497876 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452888 kB' 'Inactive: 1280424 kB' 'Active(anon): 132028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123520 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132624 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71412 kB' 'KernelStack: 6356 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 
12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 
12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
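Editor's note: the AnonHugePages lookup that just returned 0 (anon=0), and the HugePages_Surp pass that begins next, both follow the pattern the xtrace spells out field by field: split each /proc/meminfo line on ': ', skip ("continue") every field until the requested key matches, then echo the value. The sketch below is a self-contained illustration of that pattern only; the function name is hypothetical and the real setup/common.sh helper also handles per-node meminfo files, which is why the trace probes /sys/devices/system/node/node/meminfo first.

#!/usr/bin/env bash
# Illustrative meminfo lookup in the style seen in the trace above.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each mismatch shows up as a "continue" line in xtrace
        echo "$val"                        # kB value, or a bare count for the HugePages_* fields
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_field AnonHugePages    # 0 on this test VM -> anon=0
get_meminfo_field HugePages_Surp   # 0 on this test VM -> surp=0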
00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8103520 kB' 'MemAvailable: 9498248 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452464 kB' 'Inactive: 1280424 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122768 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132656 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71444 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.421 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.422 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 
12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8105576 kB' 'MemAvailable: 9500304 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452224 kB' 'Inactive: 1280424 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122540 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132648 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71436 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.423 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
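Editor's note: this third pass extracts HugePages_Rsvd the same way. Once surp and resv are in hand, the verify step checks the bookkeeping (the hugepages.sh@107 and @109 lines further down in this trace, right after it echoes nr_hugepages=1025 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0). The sketch below replays that final check with the values this run reports; it folds the two conditions into one if-statement for brevity and is not the script's exact control flow.

#!/usr/bin/env bash
# Accounting check sketch; the values are the ones echoed by this run.
expected=1025          # the odd count this test configured
nr_hugepages=1025      # reported total
surplus_hugepages=0    # HugePages_Surp
resv_hugepages=0       # HugePages_Rsvd
anon_hugepages=0       # AnonHugePages, reported but not part of the sum

if (( expected == nr_hugepages + surplus_hugepages + resv_hugepages )) \
    && (( expected == nr_hugepages )); then
    echo "nr_hugepages=${nr_hugepages} matches the odd_alloc request"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi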
00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.424 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.425 nr_hugepages=1025 00:04:28.425 resv_hugepages=0 00:04:28.425 surplus_hugepages=0 00:04:28.425 anon_hugepages=0 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.425 12:29:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8106060 kB' 'MemAvailable: 9500788 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452188 kB' 'Inactive: 1280424 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122800 kB' 'Mapped: 48520 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132644 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71432 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.425 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.426 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8106060 kB' 'MemUsed: 4135896 kB' 'SwapCached: 0 kB' 'Active: 452240 kB' 'Inactive: 1280424 kB' 'Active(anon): 131380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1611748 kB' 'Mapped: 48520 kB' 'AnonPages: 122792 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132644 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
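The lookups traced above all go through setup/common.sh's get_meminfo helper: with no node argument it reads /proc/meminfo, with a node index it reads /sys/devices/system/node/node<N>/meminfo instead, strips the leading "Node <N> " prefix, and then scans key/value pairs with IFS=': ' until the requested field matches, echoing the value. A condensed, standalone sketch of that pattern (simplified from the traced commands, not the exact SPDK source):

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup pattern visible in the trace above.
  shopt -s extglob

  # get_meminfo KEY [NODE]  -> prints the numeric value reported for KEY.
  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      # Per-node lookups read the node's own meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node <n> "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # keep scanning until the key matches
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total      # 1025 while odd_alloc is running
  get_meminfo HugePages_Surp 0     # surplus pages on node0, 0 here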
00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.427 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
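With HugePages_Rsvd and HugePages_Surp both coming back as 0, the odd_alloc bookkeeping in setup/hugepages.sh reduces to plain integer identities: the 1025-page pool reported in /proc/meminfo has to equal nr_hugepages + surplus + reserved, and after folding the per-node surplus into the node tally, node0 is expected to hold all 1025 pages (the odd page count is what this test exercises). The same arithmetic, restated as a small hedged sketch using the values printed above:

  # Values read back in the trace above (odd_alloc pass).
  nr_hugepages=1025        # requested pool size, deliberately odd
  resv_hugepages=0         # HugePages_Rsvd
  surplus_hugepages=0      # HugePages_Surp
  hugepages_total=1025     # HugePages_Total from /proc/meminfo
  node0_pages=1025         # pages accounted to node0

  # Pool-level identity checked by verify_nr_hugepages.
  (( hugepages_total == nr_hugepages + surplus_hugepages + resv_hugepages )) \
      && echo "pool size consistent"

  # Per-node expectation: the whole odd-sized pool lands on the single node.
  (( node0_pages += surplus_hugepages ))
  echo "node0=$node0_pages expecting 1025"
  [[ $node0_pages == 1025 ]] && echo "odd_alloc OK"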
00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:28.428 node0=1025 expecting 1025 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:28.428 00:04:28.428 real 0m0.581s 00:04:28.428 user 0m0.273s 00:04:28.428 sys 0m0.288s 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.428 12:29:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:28.428 ************************************ 00:04:28.428 END TEST odd_alloc 00:04:28.428 ************************************ 00:04:28.688 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:28.688 12:29:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:28.688 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.688 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.688 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.688 ************************************ 00:04:28.688 START TEST custom_alloc 00:04:28.688 ************************************ 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.688 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.949 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.949 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.949 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9159484 kB' 'MemAvailable: 10554212 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452852 kB' 'Inactive: 1280424 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132616 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71404 kB' 'KernelStack: 6244 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
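The custom_alloc pass running above asked get_test_nr_hugepages for 1048576 (kB, i.e. 1 GiB of hugepages); divided by the 2048 kB Hugepagesize from meminfo that works out to the nr_hugepages=512 seen in the trace, which HUGENODE='nodes_hp[0]=512' pins to the single node and which the meminfo dump above already reflects as HugePages_Total: 512 / Hugetlb: 1048576 kB. The AnonHugePages scan in progress here is only taken because transparent hugepages are not set to [never] on this runner ('always [madvise] never'). The size-to-page-count arithmetic, as a short hedged sketch rather than the script itself:

  # Rough sketch of the pool sizing traced above; paths and the awk call are illustrative.
  size_kb=1048576                                                  # requested pool: 1 GiB
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this runner
  nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1048576 / 2048 = 512

  # One NUMA node, so the whole pool is assigned to node0.
  HUGENODE="nodes_hp[0]=$nr_hugepages"
  echo "nr_hugepages=$nr_hugepages HUGENODE=$HUGENODE"             # 512 / nodes_hp[0]=512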
00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.950 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
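
Every get_meminfo call in this trace (AnonHugePages above, HugePages_Surp/Rsvd/Total below) follows the same pattern: slurp the meminfo file, strip any per-node prefix, and scan key/value pairs until the requested field is found. A minimal bash sketch of that lookup, reconstructed from the trace (a simplified reading; the real setup/common.sh helper may differ in detail):

  shopt -s extglob   # needed for the +([0-9]) pattern below
  get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node lookups would read the node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue  # skip fields until the requested key, as in the trace
      echo "$val"
      return 0
    done
    return 1
  }
  # get_meminfo AnonHugePages    -> 0   (hence anon=0 above)
  # get_meminfo HugePages_Total  -> 512 on this runner
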
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9159484 kB' 'MemAvailable: 10554212 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452256 kB' 'Inactive: 1280424 kB' 'Active(anon): 131396 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132620 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71408 kB' 'KernelStack: 6304 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.951 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
...
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
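
The /proc/meminfo snapshots above already pin down the hugepage state for this run; a quick consistency check on the reported counters (numbers copied from the trace, not re-measured, purely illustrative):

  hugepagesize_kb=2048    # 'Hugepagesize: 2048 kB'
  hugepages_total=512     # 'HugePages_Total: 512'
  echo $((hugepages_total * hugepagesize_kb))   # 1048576, matching 'Hugetlb: 1048576 kB'
  # 'HugePages_Free: 512' equals the total, and Surp/Rsvd are both 0 -- consistent with
  # the 0 returned for HugePages_Surp above and for HugePages_Rsvd below.
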
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9159484 kB' 'MemAvailable: 10554212 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452208 kB' 'Inactive: 1280424 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122712 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132620 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71408 kB' 'KernelStack: 6288 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:28.953 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
...
00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.216 nr_hugepages=512
00:04:29.216 resv_hugepages=0
00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
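
With anon, surp, and resv all read back as 0 and nr_hugepages reported as 512, the hugepages.sh@107-@110 checks traced just below reduce to simple accounting. A rough, self-contained sketch of that accounting, assuming the standard /proc interfaces rather than the exact setup/hugepages.sh code (nr_hugepages is taken from /proc/sys/vm/nr_hugepages here purely for illustration):

  check_hugepage_accounting() {
    local expected=$1    # 512 for this run
    local nr surp resv total
    nr=$(</proc/sys/vm/nr_hugepages)                                  # persistent pool size
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)    # 0 in the trace
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)    # 0 in the trace
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 512 in the trace
    (( expected == nr + surp + resv )) || return 1   # same identity as the @107 check below
    (( expected == total )) || return 1              # and the total re-read at @110 agrees here
  }
  # check_hugepage_accounting 512   # passes for the memory state shown in this log
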
00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.216 surplus_hugepages=0 00:04:29.216 anon_hugepages=0 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9159484 kB' 'MemAvailable: 10554212 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1280424 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122700 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132612 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71400 kB' 'KernelStack: 6288 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.216 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 
12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.217 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 9159484 kB' 'MemUsed: 3082472 kB' 'SwapCached: 0 kB' 'Active: 452236 kB' 'Inactive: 1280424 kB' 'Active(anon): 131376 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1611748 kB' 'Mapped: 48524 kB' 'AnonPages: 122792 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132612 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 
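The lookup above runs with node=0, and the trace shows mem_f being switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo before the per-node dump (note the extra MemUsed field and the missing swap counters). A sketch of that source selection, assuming the same "Node <N> " prefix stripping the script performs with its extglob pattern:

    # Per-node variant: pick the node meminfo file when it exists and
    # drop the "Node <N> " prefix each of its lines carries.
    get_node_meminfo_field() {
        local key=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed "s/^Node $node //" "$mem_f" |
            while IFS=': ' read -r var val _; do
                [[ $var == "$key" ]] && { echo "$val"; break; }
            done
    }
    # e.g. get_node_meminfo_field HugePages_Surp 0   -> 0 for node0 here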
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.219 node0=512 expecting 512 00:04:29.219 ************************************ 00:04:29.219 END TEST custom_alloc 00:04:29.219 ************************************ 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.219 00:04:29.219 real 0m0.592s 00:04:29.219 user 0m0.287s 00:04:29.219 sys 0m0.308s 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.219 12:29:01 setup.sh.hugepages.custom_alloc 
-- common/autotest_common.sh@10 -- # set +x 00:04:29.219 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.219 12:29:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:29.219 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.219 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.219 12:29:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.219 ************************************ 00:04:29.219 START TEST no_shrink_alloc 00:04:29.219 ************************************ 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.219 12:29:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.478 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.478 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:29.742 
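Before the allocation is verified, get_test_nr_hugepages converts the requested size (2097152, which the script appears to treat as kilobytes) into a page count by dividing by the default hugepage size, 2048 kB on this runner, which is where the nr_hugepages=1024 above comes from. The same arithmetic as a standalone sketch:

    # size -> hugepage count, as implied by the trace:
    # 2097152 kB / 2048 kB per page = 1024 pages (2 GiB of hugepages).
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"   # prints 1024 when Hugepagesize is 2048 kB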
12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8118620 kB' 'MemAvailable: 9513348 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1280424 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132636 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71424 kB' 'KernelStack: 6276 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.742 
12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.742 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- [setup/common.sh@31-32: read -r var val _ / compare / continue repeated for each remaining /proc/meminfo field, Active(file) through HardwareCorrupted]
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
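The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo line by line until it hits the requested key. The following is a minimal Bash sketch reconstructed from that trace, illustrative only: get_meminfo_sketch is a hypothetical name, and the real helper in the setup/common.sh referenced by the trace may differ in detail.

#!/usr/bin/env bash
shopt -s extglob

# Sketch of a get_meminfo-style lookup, reconstructed from the xtrace above.
# Prints the value of one /proc/meminfo field (or, by assumption, of a
# per-NUMA-node meminfo field when a node number is given).
get_meminfo_sketch() {
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# Assumption based on the node$node/meminfo check seen in the trace:
	# per-node queries read the node-local meminfo file instead.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# Per-node meminfo lines carry a "Node N " prefix; strip it (no-op for /proc/meminfo).
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		# Compare each key against the requested one, as the trace shows.
		[[ $var == "$get" ]] && echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo_sketch AnonHugePages   # prints 0 on the node traced above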
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.743 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.744 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8118620 kB' 'MemAvailable: 9513348 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452752 kB' 'Inactive: 1280424 kB' 'Active(anon): 131892 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123000 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132632 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71420 kB' 'KernelStack: 6296 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:04:29.744 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.744 12:29:02 setup.sh.hugepages.no_shrink_alloc -- [setup/common.sh@31-32: read/compare/continue repeated for every /proc/meminfo field from MemTotal through HugePages_Rsvd]
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
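For quick manual inspection, the hugepage counters that the helper keeps scanning for (HugePages_Total/Free/Rsvd/Surp, AnonHugePages, Hugepagesize) can be pulled straight from /proc/meminfo; the one-liner below is illustrative only and is not part of the test scripts.

grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|AnonHugePages)' /proc/meminfo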
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8118620 kB' 'MemAvailable: 9513348 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452716 kB' 'Inactive: 1280424 kB' 'Active(anon): 131856 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122992 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132632 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71420 kB' 'KernelStack: 6312 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:04:29.745 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.746 12:29:02 setup.sh.hugepages.no_shrink_alloc -- [setup/common.sh@31-32: read/compare/continue repeated for every /proc/meminfo field from MemTotal through HugePages_Free]
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.747 nr_hugepages=1024
00:04:29.747 resv_hugepages=0
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:29.747 surplus_hugepages=0
00:04:29.747 anon_hugepages=0
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
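The hugepages.sh entries traced above (script lines 97 through 110 in the xtrace) amount to the accounting step sketched below. This is a reconstruction, not the SPDK code itself: meminfo_val is a hypothetical stand-in for the script's get_meminfo helper, and 1024 is simply the hugepage count configured for this particular run.

#!/usr/bin/env bash
# Sketch of the accounting check traced above (hugepages.sh@97-110 in the
# xtrace); the real logic in setup/hugepages.sh may differ in detail.
meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                          # pool size used by this run
anon=$(meminfo_val AnonHugePages)          # 0 in the trace
surp=$(meminfo_val HugePages_Surp)         # 0
resv=$(meminfo_val HugePages_Rsvd)         # 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The two arithmetic checks seen at hugepages.sh@107 and @109: the requested
# page count must equal the pool size plus surplus and reserved pages (all
# zero here), and must match the configured pool size exactly.
(( 1024 == nr_hugepages + surp + resv )) || exit 1
(( 1024 == nr_hugepages )) || exit 1

# hugepages.sh@110 then re-reads HugePages_Total for the follow-up check.
total=$(meminfo_val HugePages_Total)       # 1024 in the trace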
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8118872 kB' 'MemAvailable: 9513600 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452228 kB' 'Inactive: 1280424 kB' 'Active(anon): 131368 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122736 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132632 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71420 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:29.747 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- [setup/common.sh@31-32: per-field read/compare/continue scan for HugePages_Total]
00:04:29.748 12:29:02
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
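Every comparison in this loop is rendered with a backslash before each character of the right-hand side, e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l. That is how bash's xtrace re-quotes the expanded, quoted pattern of [[ ... == "$get" ]] so it reads back as a literal, non-glob match; it can be reproduced with a short snippet (variable names illustrative):

  set -x
  get=HugePages_Rsvd
  [[ HugePages_Rsvd == "$get" ]]
  # xtrace prints roughly: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]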
00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.748 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.748 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8118872 kB' 'MemUsed: 4123084 kB' 'SwapCached: 0 kB' 'Active: 452296 kB' 'Inactive: 1280424 kB' 'Active(anon): 131436 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1611748 kB' 'Mapped: 48524 kB' 'AnonPages: 122844 kB' 
'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132632 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.749 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 
12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.750 node0=1024 expecting 1024 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.750 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.272 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.272 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.272 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:30.272 12:29:02 
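The hugepages.sh@202 block above re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no, and the INFO line shows the existing 1024-page reservation on node0 is left in place rather than shrunk to 512, which is the behaviour the no_shrink_alloc case asserts. A hypothetical grow-only helper that captures the same behaviour (illustrative sketch; the sysfs path is the standard 2 MiB hugepage knob, the message format is copied from the trace, and this is not SPDK's scripts/setup.sh):

  # Grow-only per-node allocation of 2048 kB hugepages (needs root to write).
  ensure_hugepages() {
      local want=$1 node=${2:-0}
      local nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      local have
      have=$(<"$nr")
      if (( have >= want )); then
          echo "INFO: Requested $want hugepages but $have already allocated on node$node"
          return 0
      fi
      echo "$want" > "$nr"   # only ever grows; this run keeps CLEAR_HUGE=no so nothing is freed
  }

  # ensure_hugepages 512 0  -> prints the same INFO line seen above when 1024
  #                            pages are already reserved on node0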
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8114084 kB' 'MemAvailable: 9508812 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 453256 kB' 'Inactive: 1280424 kB' 'Active(anon): 132396 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123528 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132640 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71428 kB' 'KernelStack: 6372 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
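The @204 call above is the second verify_nr_hugepages pass. Its @96 test looks at the transparent hugepage mode string, here "always [madvise] never"; since the active mode is not [never], the script goes on to sample AnonHugePages (0 kB in the snapshot above) before re-checking the reserved pool. A sketch of that gate, assuming the mode string comes from the usual sysfs knob and reusing the hypothetical get_one_meminfo_field helper sketched earlier:

  # THP gate as traced at hugepages.sh@96/@97 (sketch; the sysfs path is an
  # assumption about where the traced value originates).
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP can contribute anonymous huge pages, so account for them too
      anon=$(get_one_meminfo_field AnonHugePages)        # 0 on this runner
  else
      anon=0
  fi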
00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.272 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8113836 kB' 'MemAvailable: 9508564 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452700 kB' 'Inactive: 1280424 kB' 'Active(anon): 131840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132624 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71412 kB' 'KernelStack: 6304 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.273 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
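The xtrace around this point is setup/common.sh's get_meminfo scanning every key in the meminfo snapshot and skipping, via continue, each one that is not the field it was asked for (here HugePages_Surp), then echoing the matching value. A minimal stand-alone sketch of that lookup, assuming simplified names rather than the exact SPDK helper, and reading /proc/meminfo directly instead of the pre-captured array the real script uses:

    # Look up one key in /proc/meminfo the way the trace does it: split each
    # line on ': ', skip non-matching keys, print the value of the match.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"      # the unit ("kB"), if any, lands in the throwaway field
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the machine in this log

Each non-matching key produces exactly the continue / IFS=': ' / read -r var val _ triplet that repeats through this part of the log.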
00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 
12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.274 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8113836 kB' 'MemAvailable: 9508564 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452628 kB' 'Inactive: 1280424 kB' 'Active(anon): 131768 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122872 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132624 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71412 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.275 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
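Earlier in each of these get_meminfo calls the trace also shows where the numbers come from: node is empty, so the per-node path degenerates to /sys/devices/system/node/node/meminfo, the -e test fails, and the function falls back to the system-wide /proc/meminfo before stripping any "Node N " prefixes. A sketch of that source selection as the trace implies it (the exact conditionals in setup/common.sh may differ):

    shopt -s extglob                  # needed for the +([0-9]) prefix pattern
    node=${1:-}                       # e.g. "0" for NUMA node 0; empty here
    mem_f=/proc/meminfo
    node_f=/sys/devices/system/node/node${node}/meminfo
    if [[ -n $node && -e $node_f ]]; then
        mem_f=$node_f                 # per-node counters carry a "Node N " prefix
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # harmless no-op for the global file

That is why the same 12241956 kB MemTotal snapshot keeps reappearing: every call captures a fresh snapshot into the mem array and then scans it linearly.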
00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.276 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.277 nr_hugepages=1024 00:04:30.277 resv_hugepages=0 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.277 surplus_hugepages=0 00:04:30.277 anon_hugepages=0 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8114468 kB' 'MemAvailable: 9509196 kB' 'Buffers: 2436 kB' 'Cached: 1609312 kB' 'SwapCached: 0 kB' 'Active: 452228 kB' 'Inactive: 1280424 kB' 'Active(anon): 131368 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122736 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132624 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71412 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 353528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.277 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
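A little above this point, between the HugePages_Rsvd lookup and the HugePages_Total scan now in progress, the log prints the test's verdict for this allocation: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0, followed by two arithmetic checks. Reconstructed as a sketch (the literal 1024 in the trace is the expanded expected count; variable names follow the echoes, not necessarily the hugepages.sh source):

    expected=1024       # hugepages the no_shrink_alloc test configured
    nr_hugepages=1024   # persistent pool size the script is tracking
    surp=0              # HugePages_Surp from get_meminfo
    resv=0              # HugePages_Rsvd from get_meminfo
    (( expected == nr_hugepages + surp + resv ))   # nothing surplus or reserved
    (( expected == nr_hugepages ))                 # and the pool itself is intact

Both checks pass, so the script moves on to re-reading HugePages_Total straight from meminfo, which is the scan running through the surrounding trace.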
00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.278 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
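The long run of "[[ <key> == HugePages_Total ]] / continue / IFS=': ' / read -r var val _" entries around this point is the xtrace of the get_meminfo helper in setup/common.sh: it walks /proc/meminfo (or a per-node meminfo file) field by field, skipping every key until it reaches the requested one, then echoes its value: 1024 for the HugePages_Total scan that ends just below, and 0 for the node0 HugePages_Surp scan after it. A minimal standalone sketch of that lookup (a simplified reimplementation, not the exact common.sh source; the function name is hypothetical):

    # get_meminfo_sketch <field> [node]
    # e.g. get_meminfo_sketch HugePages_Total    -> system-wide value
    #      get_meminfo_sketch HugePages_Surp 0   -> value for NUMA node 0
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node meminfo lines carry a "Node N " prefix; strip it so the key
        # comparison works for both file layouts, then scan until the match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }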
00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241956 kB' 'MemFree: 8114468 kB' 'MemUsed: 4127488 kB' 'SwapCached: 0 kB' 'Active: 452488 kB' 'Inactive: 1280424 kB' 'Active(anon): 131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 
kB' 'Inactive(file): 1280424 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1611748 kB' 'Mapped: 48524 kB' 'AnonPages: 122736 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132620 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 
12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.279 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.280 node0=1024 expecting 1024 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.280 12:29:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.280 00:04:30.280 real 0m1.120s 00:04:30.280 user 0m0.518s 00:04:30.280 sys 0m0.615s 00:04:30.280 ************************************ 00:04:30.281 END TEST no_shrink_alloc 00:04:30.281 ************************************ 00:04:30.281 12:29:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.281 12:29:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:30.281 12:29:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.281 
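no_shrink_alloc passes at this point because the count rebuilt from the node0 meminfo scan matches the expectation ('node0=1024 expecting 1024'). The clear_hp trace that starts here and continues below simply writes 0 to every per-node nr_hugepages file and exports CLEAR_HUGE=yes so the reserved pool is released before the next test. A hedged sketch of that cleanup (paths taken from the trace, body simplified; the redirect into nr_hugepages is assumed, since xtrace does not show redirections):

    # Release all hugepages reserved for the test on every NUMA node.
    clear_hp_sketch() {
        local node_dir hp
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node_dir"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        # Assumption: CLEAR_HUGE is a flag read by the surrounding setup scripts.
        export CLEAR_HUGE=yes
    }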
12:29:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:30.281 12:29:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:30.281 00:04:30.281 real 0m4.843s 00:04:30.281 user 0m2.258s 00:04:30.281 sys 0m2.579s 00:04:30.281 12:29:02 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.281 12:29:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.281 ************************************ 00:04:30.281 END TEST hugepages 00:04:30.281 ************************************ 00:04:30.538 12:29:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:30.538 12:29:02 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:30.538 12:29:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.538 12:29:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.538 12:29:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.538 ************************************ 00:04:30.538 START TEST driver 00:04:30.538 ************************************ 00:04:30.538 12:29:02 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:30.538 * Looking for test storage... 00:04:30.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:30.539 12:29:03 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:30.539 12:29:03 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.539 12:29:03 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.105 12:29:03 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:31.105 12:29:03 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.105 12:29:03 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.105 12:29:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.105 ************************************ 00:04:31.105 START TEST guess_driver 00:04:31.105 ************************************ 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
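The guess_driver trace that follows shows how pick_driver chooses a userspace I/O driver: VFIO is used only when the host has populated IOMMU groups or unsafe no-IOMMU mode is enabled; on this VM the group count is 0, so it falls back to uio_pci_generic after modprobe --show-depends confirms the module (and its uio dependency) resolve to real .ko files. A condensed sketch of that decision, written as a simplified standalone helper (vfio-pci as the VFIO driver name is an assumption):

    # Pick a userspace PCI driver the same way the trace below does.
    pick_driver_sketch() {
        local n_groups unsafe=""
        n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if (( n_groups > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }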
00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:31.105 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:31.105 Looking for driver=uio_pci_generic 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.105 12:29:03 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.671 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:31.671 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:31.671 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.930 12:29:04 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.496 00:04:32.496 real 0m1.391s 00:04:32.496 user 0m0.521s 00:04:32.496 sys 0m0.878s 00:04:32.496 12:29:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:32.496 12:29:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.496 ************************************ 00:04:32.496 END TEST guess_driver 00:04:32.496 ************************************ 00:04:32.496 12:29:05 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:32.496 00:04:32.496 real 0m2.066s 00:04:32.496 user 0m0.752s 00:04:32.496 sys 0m1.367s 00:04:32.496 12:29:05 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.496 12:29:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.496 ************************************ 00:04:32.496 END TEST driver 00:04:32.496 ************************************ 00:04:32.496 12:29:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:32.496 12:29:05 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:32.496 12:29:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.496 12:29:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.496 12:29:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.496 ************************************ 00:04:32.496 START TEST devices 00:04:32.496 ************************************ 00:04:32.496 12:29:05 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:32.755 * Looking for test storage... 00:04:32.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:32.755 12:29:05 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:32.755 12:29:05 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:32.755 12:29:05 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.755 12:29:05 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
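In the devices suite, get_zoned_devs (traced next) walks every /sys/block/nvme* entry and records the ones whose queue/zoned attribute is something other than 'none', so zoned namespaces are kept out of the mount tests; here all four namespaces report 'none' and nothing is excluded. A minimal equivalent of that filter (array name is illustrative, not the script's own):

    # Collect zoned NVMe block devices so later steps can skip them.
    declare -A zoned_sketch=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(< "$nvme/queue/zoned") != none ]]; then
            zoned_sketch[${nvme##*/}]=1    # e.g. zoned_sketch[nvme0n1]=1
        fi
    done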
00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:33.321 12:29:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.321 12:29:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:33.321 12:29:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.321 No valid GPT data, bailing 00:04:33.321 12:29:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.321 12:29:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.321 12:29:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.321 12:29:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.322 12:29:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.322 12:29:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.322 12:29:05 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:33.322 12:29:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:33.322 
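Each candidate namespace (nvme0n1 above, nvme0n2/n3 and nvme1n1 below) is accepted only if block_in_use finds no partition table on it ('No valid GPT data, bailing' from spdk-gpt.py is the expected, healthy message here) and its capacity is at least min_disk_size, 3221225472 bytes (3 GiB). A rough equivalent of that filter using plain blkid and blockdev instead of the repo's spdk-gpt.py helper (helper name hypothetical):

    # Accept a disk only if it carries no partition table and is >= 3 GiB.
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    disk_is_usable() {
        local dev=$1    # e.g. /dev/nvme0n1
        # A non-empty PTTYPE means a partition table is already present.
        [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]] || return 1
        (( $(blockdev --getsize64 "$dev") >= min_disk_size ))
    }

In the log the three 4 GiB namespaces (4294967296 bytes each) and the 5 GiB nvme1n1 all pass, '(( 4 > 0 ))' confirms at least one usable disk, and nvme0n1 becomes test_disk for the nvme_mount test.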
12:29:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:33.322 12:29:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:33.580 No valid GPT data, bailing 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:33.580 No valid GPT data, bailing 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:33.580 12:29:06 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:33.580 No valid GPT data, bailing 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.580 12:29:06 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:33.580 12:29:06 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.580 12:29:06 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.580 12:29:06 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.580 12:29:06 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.580 12:29:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.580 ************************************ 00:04:33.580 START TEST nvme_mount 00:04:33.580 ************************************ 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.580 12:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.953 Creating new GPT entries in memory. 00:04:34.953 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.953 other utilities. 00:04:34.953 12:29:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.953 12:29:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.953 12:29:07 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.953 12:29:07 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.953 12:29:07 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:35.888 Creating new GPT entries in memory. 00:04:35.888 The operation has completed successfully. 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57014 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.888 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.146 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.146 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.146 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.146 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:36.405 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.405 12:29:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.663 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.663 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.663 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:36.663 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:36.663 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.664 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.922 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.922 12:29:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.180 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:37.181 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.181 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.181 12:29:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.439 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.439 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.439 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.439 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.439 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.439 12:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.439 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.439 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.698 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.698 00:04:37.698 real 0m4.002s 00:04:37.698 user 0m0.683s 00:04:37.698 sys 0m1.072s 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.698 12:29:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.698 ************************************ 00:04:37.698 END TEST nvme_mount 00:04:37.698 ************************************ 00:04:37.698 12:29:10 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:37.698 12:29:10 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.698 12:29:10 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.698 12:29:10 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.698 12:29:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.698 ************************************ 00:04:37.698 START TEST dm_mount 00:04:37.698 ************************************ 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
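The dm_mount prologue traced above builds its partition list purely in shell before touching the disk: two partition names derived from the physical volume, plus a byte size that gets converted to sgdisk units further down. A minimal sketch of that setup, with the disk name taken from this run's log and the run_test/xtrace plumbing from autotest_common.sh stripped out (the real logic lives in test/setup/common.sh):

```bash
#!/usr/bin/env bash
# Sketch of the partition-list setup seen in the dm_mount trace above.
set -euo pipefail

disk=nvme0n1                 # physical volume used for the dm target in this run
part_no=2                    # dm_mount stitches two partitions together
size=1073741824              # "local size=1073741824" in the trace (1 GiB)

parts=()
for ((part = 1; part <= part_no; part++)); do
  parts+=("${disk}p${part}")          # -> nvme0n1p1 nvme0n1p2
done

printf 'partition to create: %s\n' "${parts[@]}"
```

The script then shrinks `size` to sgdisk units and issues one sgdisk call per entry, which is the sequence that follows next in the log.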
00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.698 12:29:10 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.078 Creating new GPT entries in memory. 00:04:39.078 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.078 other utilities. 00:04:39.078 12:29:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.078 12:29:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.078 12:29:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.078 12:29:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.078 12:29:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:40.012 Creating new GPT entries in memory. 00:04:40.012 The operation has completed successfully. 00:04:40.012 12:29:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.012 12:29:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.012 12:29:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.012 12:29:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.012 12:29:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:40.947 The operation has completed successfully. 
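The two sgdisk calls above come straight from the arithmetic the trace prints just before them: `size /= 4096` turns 1073741824 into 262144 units, the first partition starts at 2048, and each end point is `part_start + size - 1`. A quick check using only the values shown in the log:

```bash
# Reproduce the ranges handed to sgdisk in the trace above,
# using only the arithmetic the trace itself shows.
size=1073741824
size=$((size / 4096))                        # 262144 units per partition

part_start=2048                              # first data partition offset
part_end=$((part_start + size - 1))
echo "--new=1:${part_start}:${part_end}"     # --new=1:2048:264191

part_start=$((part_end + 1))
part_end=$((part_start + size - 1))
echo "--new=2:${part_start}:${part_end}"     # --new=2:264192:526335
```

Each sgdisk call is also wrapped in `flock /dev/nvme0n1` in the real script so a concurrent partition rescan cannot race with the table update, while scripts/sync_dev_uevents.sh (started just before the zap) watches for the resulting nvme0n1p1/nvme0n1p2 uevents.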
00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57448 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:40.947 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.948 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.206 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.465 12:29:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.724 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:41.983 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:41.983 00:04:41.983 real 0m4.222s 00:04:41.983 user 0m0.498s 00:04:41.983 sys 0m0.693s 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.983 12:29:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:41.983 ************************************ 00:04:41.983 END TEST dm_mount 00:04:41.983 ************************************ 00:04:41.983 12:29:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:41.983 12:29:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:41.983 12:29:14 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:41.983 12:29:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.983 12:29:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.983 12:29:14 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.983 12:29:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.983 12:29:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.242 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.242 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.242 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.242 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.242 12:29:14 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:42.242 12:29:14 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.242 12:29:14 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.242 12:29:14 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.242 12:29:14 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.242 12:29:14 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.242 12:29:14 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:42.242 00:04:42.242 real 0m9.761s 00:04:42.242 user 0m1.812s 00:04:42.242 sys 0m2.393s 00:04:42.242 12:29:14 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.242 ************************************ 00:04:42.242 END TEST devices 00:04:42.242 ************************************ 00:04:42.242 12:29:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.242 12:29:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:42.242 00:04:42.242 real 0m21.741s 00:04:42.242 user 0m7.009s 00:04:42.242 sys 0m9.162s 00:04:42.242 12:29:14 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.242 12:29:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.242 ************************************ 00:04:42.242 END TEST setup.sh 00:04:42.242 ************************************ 00:04:42.501 12:29:14 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.501 12:29:14 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:43.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.068 Hugepages 00:04:43.068 node hugesize free / total 00:04:43.068 node0 1048576kB 0 / 0 00:04:43.068 node0 2048kB 2048 / 2048 00:04:43.068 00:04:43.068 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.068 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:43.068 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:43.325 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:43.325 12:29:15 -- spdk/autotest.sh@130 -- # uname -s 00:04:43.325 12:29:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:43.325 12:29:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:43.325 12:29:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.888 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.888 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:44.144 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:44.144 12:29:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:45.098 12:29:17 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:45.098 12:29:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:45.098 12:29:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.098 12:29:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:45.098 12:29:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:45.098 12:29:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:45.098 12:29:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.098 12:29:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.098 12:29:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:45.098 12:29:17 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:45.098 12:29:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:45.098 12:29:17 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.612 Waiting for block devices as requested 00:04:45.612 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.612 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.612 12:29:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:45.612 12:29:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:45.612 12:29:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:45.612 12:29:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:45.612 12:29:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:45.612 12:29:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:45.612 12:29:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:45.612 12:29:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:45.612 12:29:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:45.612 12:29:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:45.612 12:29:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:45.612 12:29:18 -- common/autotest_common.sh@1557 -- # continue 00:04:45.612 
12:29:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:45.612 12:29:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:45.612 12:29:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.612 12:29:18 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:45.612 12:29:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.612 12:29:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:45.612 12:29:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.612 12:29:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:45.612 12:29:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:45.612 12:29:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:45.612 12:29:18 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:45.612 12:29:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:45.612 12:29:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:45.869 12:29:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:45.869 12:29:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:45.869 12:29:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:45.869 12:29:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:45.869 12:29:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:45.869 12:29:18 -- common/autotest_common.sh@1557 -- # continue 00:04:45.869 12:29:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:45.869 12:29:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.869 12:29:18 -- common/autotest_common.sh@10 -- # set +x 00:04:45.869 12:29:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:45.869 12:29:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.869 12:29:18 -- common/autotest_common.sh@10 -- # set +x 00:04:45.869 12:29:18 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.694 12:29:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:46.694 12:29:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.694 12:29:19 -- common/autotest_common.sh@10 -- # set +x 00:04:46.694 12:29:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:46.694 12:29:19 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:46.694 12:29:19 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.694 12:29:19 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:46.694 12:29:19 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:46.694 12:29:19 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:46.694 12:29:19 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:46.694 12:29:19 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:46.694 12:29:19 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.694 12:29:19 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.694 12:29:19 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:46.694 12:29:19 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:46.694 12:29:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:46.694 12:29:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.694 12:29:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:46.694 12:29:19 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.694 12:29:19 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.694 12:29:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.694 12:29:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:46.694 12:29:19 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.694 12:29:19 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.694 12:29:19 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:46.694 12:29:19 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:46.694 12:29:19 -- common/autotest_common.sh@1593 -- # return 0 00:04:46.694 12:29:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:46.694 12:29:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:46.694 12:29:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.694 12:29:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.694 12:29:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:46.694 12:29:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.694 12:29:19 -- common/autotest_common.sh@10 -- # set +x 00:04:46.694 12:29:19 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:46.694 12:29:19 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:46.694 12:29:19 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:46.694 12:29:19 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.694 12:29:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.694 12:29:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.694 12:29:19 -- common/autotest_common.sh@10 -- # set +x 00:04:46.694 ************************************ 00:04:46.694 START TEST env 00:04:46.694 ************************************ 00:04:46.694 12:29:19 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.953 * Looking for test storage... 
00:04:46.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.953 12:29:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.953 12:29:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.953 12:29:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.953 12:29:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 START TEST env_memory 00:04:46.953 ************************************ 00:04:46.953 12:29:19 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.953 00:04:46.953 00:04:46.953 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.953 http://cunit.sourceforge.net/ 00:04:46.953 00:04:46.953 00:04:46.953 Suite: memory 00:04:46.953 Test: alloc and free memory map ...[2024-07-15 12:29:19.474513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.953 passed 00:04:46.953 Test: mem map translation ...[2024-07-15 12:29:19.505712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.953 [2024-07-15 12:29:19.505797] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.953 [2024-07-15 12:29:19.505876] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.953 [2024-07-15 12:29:19.505895] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.953 passed 00:04:46.953 Test: mem map registration ...[2024-07-15 12:29:19.569889] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:46.953 [2024-07-15 12:29:19.569964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:46.953 passed 00:04:47.212 Test: mem map adjacent registrations ...passed 00:04:47.212 00:04:47.212 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.212 suites 1 1 n/a 0 0 00:04:47.212 tests 4 4 4 0 0 00:04:47.212 asserts 152 152 152 0 n/a 00:04:47.212 00:04:47.212 Elapsed time = 0.214 seconds 00:04:47.212 00:04:47.213 real 0m0.230s 00:04:47.213 user 0m0.215s 00:04:47.213 sys 0m0.012s 00:04:47.213 12:29:19 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.213 12:29:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:47.213 ************************************ 00:04:47.213 END TEST env_memory 00:04:47.213 ************************************ 00:04:47.213 12:29:19 env -- common/autotest_common.sh@1142 -- # return 0 00:04:47.213 12:29:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.213 12:29:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.213 12:29:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.213 12:29:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.213 ************************************ 00:04:47.213 START TEST env_vtophys 
00:04:47.213 ************************************ 00:04:47.213 12:29:19 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.213 EAL: lib.eal log level changed from notice to debug 00:04:47.213 EAL: Detected lcore 0 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 1 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 2 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 3 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 4 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 5 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 6 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 7 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 8 as core 0 on socket 0 00:04:47.213 EAL: Detected lcore 9 as core 0 on socket 0 00:04:47.213 EAL: Maximum logical cores by configuration: 128 00:04:47.213 EAL: Detected CPU lcores: 10 00:04:47.213 EAL: Detected NUMA nodes: 1 00:04:47.213 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:47.213 EAL: Detected shared linkage of DPDK 00:04:47.213 EAL: No shared files mode enabled, IPC will be disabled 00:04:47.213 EAL: Selected IOVA mode 'PA' 00:04:47.213 EAL: Probing VFIO support... 00:04:47.213 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.213 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:47.213 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.213 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.213 EAL: Setting up physically contiguous memory... 00:04:47.213 EAL: Setting maximum number of open files to 524288 00:04:47.213 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.213 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.213 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.213 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.213 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.213 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.213 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.213 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.213 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.213 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.213 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.213 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.213 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.213 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.213 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.213 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.213 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.213 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.213 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.213 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.213 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.213 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.213 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.213 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.213 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.213 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.213 EAL: Hugepages will be freed exactly as allocated. 
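The EAL preallocation figures printed above are internally consistent: each of the four memseg lists holds 8192 segments of 2 MiB hugepages, which is exactly the 0x400000000-byte virtual areas being reserved. A quick consistency check of those numbers (this only re-does the arithmetic from the log; it is not how EAL sizes its lists internally):

```bash
# Cross-check the EAL reservations printed above:
# reserved VA per memseg list = n_segs * hugepage_sz.
n_segs=8192                      # "n_segs:8192"
hugepage_sz=2097152              # "hugepage_sz:2097152" (2 MiB)
n_lists=4                        # "Creating 4 segment lists"

per_list=$((n_segs * hugepage_sz))
printf 'per list : %#x bytes = %d GiB\n' "$per_list" $((per_list >> 30))
# -> 0x400000000 bytes = 16 GiB, matching "size = 0x400000000"

printf 'all lists: %d GiB of address space reserved\n' $(( (per_list * n_lists) >> 30 ))
```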
00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: TSC frequency is ~2200000 KHz 00:04:47.213 EAL: Main lcore 0 is ready (tid=7fde08228a00;cpuset=[0]) 00:04:47.213 EAL: Trying to obtain current memory policy. 00:04:47.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.213 EAL: Restoring previous memory policy: 0 00:04:47.213 EAL: request: mp_malloc_sync 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.213 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.213 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.213 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.213 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:47.213 00:04:47.213 00:04:47.213 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.213 http://cunit.sourceforge.net/ 00:04:47.213 00:04:47.213 00:04:47.213 Suite: components_suite 00:04:47.213 Test: vtophys_malloc_test ...passed 00:04:47.213 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:47.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.213 EAL: Restoring previous memory policy: 4 00:04:47.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.213 EAL: request: mp_malloc_sync 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.213 EAL: request: mp_malloc_sync 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.213 EAL: Trying to obtain current memory policy. 00:04:47.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.213 EAL: Restoring previous memory policy: 4 00:04:47.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.213 EAL: request: mp_malloc_sync 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.213 EAL: request: mp_malloc_sync 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.213 EAL: Trying to obtain current memory policy. 00:04:47.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.213 EAL: Restoring previous memory policy: 4 00:04:47.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.213 EAL: request: mp_malloc_sync 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.213 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.213 EAL: request: mp_malloc_sync 00:04:47.213 EAL: No shared files mode enabled, IPC is disabled 00:04:47.214 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.214 EAL: Trying to obtain current memory policy. 
00:04:47.214 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.214 EAL: Restoring previous memory policy: 4 00:04:47.214 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.214 EAL: request: mp_malloc_sync 00:04:47.214 EAL: No shared files mode enabled, IPC is disabled 00:04:47.214 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.214 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.472 EAL: Trying to obtain current memory policy. 00:04:47.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.472 EAL: Restoring previous memory policy: 4 00:04:47.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.472 EAL: Trying to obtain current memory policy. 00:04:47.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.472 EAL: Restoring previous memory policy: 4 00:04:47.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.472 EAL: Trying to obtain current memory policy. 00:04:47.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.472 EAL: Restoring previous memory policy: 4 00:04:47.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was shrunk by 130MB 00:04:47.472 EAL: Trying to obtain current memory policy. 00:04:47.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.472 EAL: Restoring previous memory policy: 4 00:04:47.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.472 EAL: request: mp_malloc_sync 00:04:47.472 EAL: No shared files mode enabled, IPC is disabled 00:04:47.472 EAL: Heap on socket 0 was expanded by 258MB 00:04:47.730 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.730 EAL: request: mp_malloc_sync 00:04:47.730 EAL: No shared files mode enabled, IPC is disabled 00:04:47.730 EAL: Heap on socket 0 was shrunk by 258MB 00:04:47.730 EAL: Trying to obtain current memory policy. 
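The expand/shrink pairs reported by the mem-event callback above follow a simple progression: each round of vtophys_spdk_malloc_test roughly doubles the allocation, and the heap growth reported in this run works out to 2^k + 2 MB (4, 6, 10, 18, 34, 66, 130, 258 so far). This is only an observation about the numbers in the log; the allocations themselves happen inside the C unit test, not in shell:

```bash
# Reproduce the "Heap on socket 0 was expanded by ..." sizes seen so far.
for k in 1 2 3 4 5 6 7 8; do
  printf '%dMB ' $(( (1 << k) + 2 ))
done
echo    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB
```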
00:04:47.730 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.730 EAL: Restoring previous memory policy: 4 00:04:47.730 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.730 EAL: request: mp_malloc_sync 00:04:47.731 EAL: No shared files mode enabled, IPC is disabled 00:04:47.731 EAL: Heap on socket 0 was expanded by 514MB 00:04:47.989 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.989 EAL: request: mp_malloc_sync 00:04:47.989 EAL: No shared files mode enabled, IPC is disabled 00:04:47.989 EAL: Heap on socket 0 was shrunk by 514MB 00:04:47.989 EAL: Trying to obtain current memory policy. 00:04:47.989 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.248 EAL: Restoring previous memory policy: 4 00:04:48.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.248 EAL: request: mp_malloc_sync 00:04:48.248 EAL: No shared files mode enabled, IPC is disabled 00:04:48.248 EAL: Heap on socket 0 was expanded by 1026MB 00:04:48.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.508 passed 00:04:48.508 00:04:48.508 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.508 suites 1 1 n/a 0 0 00:04:48.508 tests 2 2 2 0 0 00:04:48.508 asserts 5330 5330 5330 0 n/a 00:04:48.508 00:04:48.508 Elapsed time = 1.276 seconds 00:04:48.508 EAL: request: mp_malloc_sync 00:04:48.508 EAL: No shared files mode enabled, IPC is disabled 00:04:48.508 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:48.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.768 EAL: request: mp_malloc_sync 00:04:48.768 EAL: No shared files mode enabled, IPC is disabled 00:04:48.768 EAL: Heap on socket 0 was shrunk by 2MB 00:04:48.768 EAL: No shared files mode enabled, IPC is disabled 00:04:48.768 EAL: No shared files mode enabled, IPC is disabled 00:04:48.768 EAL: No shared files mode enabled, IPC is disabled 00:04:48.768 00:04:48.768 real 0m1.489s 00:04:48.768 user 0m0.810s 00:04:48.768 sys 0m0.541s 00:04:48.768 12:29:21 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.768 12:29:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:48.768 ************************************ 00:04:48.768 END TEST env_vtophys 00:04:48.768 ************************************ 00:04:48.768 12:29:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:48.768 12:29:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.768 12:29:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.768 12:29:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.768 12:29:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.768 ************************************ 00:04:48.768 START TEST env_pci 00:04:48.768 ************************************ 00:04:48.768 12:29:21 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.768 00:04:48.768 00:04:48.768 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.768 http://cunit.sourceforge.net/ 00:04:48.768 00:04:48.768 00:04:48.768 Suite: pci 00:04:48.768 Test: pci_hook ...[2024-07-15 12:29:21.254190] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58635 has claimed it 00:04:48.768 passed 00:04:48.768 00:04:48.768 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.768 suites 1 1 n/a 0 0 00:04:48.768 tests 1 1 1 0 0 00:04:48.768 asserts 25 25 25 0 n/a 00:04:48.768 
00:04:48.768 Elapsed time = 0.003 seconds 00:04:48.768 EAL: Cannot find device (10000:00:01.0) 00:04:48.768 EAL: Failed to attach device on primary process 00:04:48.768 00:04:48.768 real 0m0.024s 00:04:48.768 user 0m0.008s 00:04:48.768 sys 0m0.015s 00:04:48.768 12:29:21 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.768 12:29:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:48.768 ************************************ 00:04:48.768 END TEST env_pci 00:04:48.768 ************************************ 00:04:48.768 12:29:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:48.768 12:29:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:48.768 12:29:21 env -- env/env.sh@15 -- # uname 00:04:48.768 12:29:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:48.768 12:29:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:48.768 12:29:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.768 12:29:21 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:48.768 12:29:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.768 12:29:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.768 ************************************ 00:04:48.768 START TEST env_dpdk_post_init 00:04:48.768 ************************************ 00:04:48.768 12:29:21 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.768 EAL: Detected CPU lcores: 10 00:04:48.768 EAL: Detected NUMA nodes: 1 00:04:48.768 EAL: Detected shared linkage of DPDK 00:04:48.768 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.768 EAL: Selected IOVA mode 'PA' 00:04:49.027 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.027 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:49.027 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:49.027 Starting DPDK initialization... 00:04:49.027 Starting SPDK post initialization... 00:04:49.027 SPDK NVMe probe 00:04:49.027 Attaching to 0000:00:10.0 00:04:49.027 Attaching to 0000:00:11.0 00:04:49.027 Attached to 0000:00:10.0 00:04:49.027 Attached to 0000:00:11.0 00:04:49.027 Cleaning up... 
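Both env_pci (which probes the fake address 10000:00:01.0 and treats the failed attach as the expected outcome) and env_dpdk_post_init above rely on the two QEMU NVMe controllers being bound to a userspace-capable driver at that moment; earlier in the log setup.sh flips them between nvme and uio_pci_generic. A small, purely illustrative way to inspect the current binding outside the test scripts, using the BDFs from this run (the autotest scripts get the same information via scripts/setup.sh status):

```bash
# Show which kernel driver currently owns the NVMe controllers used in
# this run (0000:00:10.0 and 0000:00:11.0).
for bdf in 0000:00:10.0 0000:00:11.0; do
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    echo "$bdf -> $(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")"
  else
    echo "$bdf -> (no driver bound)"
  fi
done
```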
00:04:49.027 00:04:49.027 real 0m0.188s 00:04:49.027 user 0m0.055s 00:04:49.027 sys 0m0.032s 00:04:49.027 12:29:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.027 12:29:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.027 ************************************ 00:04:49.027 END TEST env_dpdk_post_init 00:04:49.027 ************************************ 00:04:49.027 12:29:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.027 12:29:21 env -- env/env.sh@26 -- # uname 00:04:49.027 12:29:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.027 12:29:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.027 12:29:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.027 12:29:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.027 12:29:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.027 ************************************ 00:04:49.027 START TEST env_mem_callbacks 00:04:49.027 ************************************ 00:04:49.027 12:29:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.027 EAL: Detected CPU lcores: 10 00:04:49.027 EAL: Detected NUMA nodes: 1 00:04:49.027 EAL: Detected shared linkage of DPDK 00:04:49.027 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.027 EAL: Selected IOVA mode 'PA' 00:04:49.027 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.027 00:04:49.027 00:04:49.028 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.028 http://cunit.sourceforge.net/ 00:04:49.028 00:04:49.028 00:04:49.028 Suite: memory 00:04:49.028 Test: test ... 
00:04:49.028 register 0x200000200000 2097152 00:04:49.028 malloc 3145728 00:04:49.028 register 0x200000400000 4194304 00:04:49.028 buf 0x200000500000 len 3145728 PASSED 00:04:49.028 malloc 64 00:04:49.028 buf 0x2000004fff40 len 64 PASSED 00:04:49.028 malloc 4194304 00:04:49.028 register 0x200000800000 6291456 00:04:49.028 buf 0x200000a00000 len 4194304 PASSED 00:04:49.028 free 0x200000500000 3145728 00:04:49.028 free 0x2000004fff40 64 00:04:49.028 unregister 0x200000400000 4194304 PASSED 00:04:49.028 free 0x200000a00000 4194304 00:04:49.028 unregister 0x200000800000 6291456 PASSED 00:04:49.028 malloc 8388608 00:04:49.028 register 0x200000400000 10485760 00:04:49.028 buf 0x200000600000 len 8388608 PASSED 00:04:49.028 free 0x200000600000 8388608 00:04:49.028 unregister 0x200000400000 10485760 PASSED 00:04:49.028 passed 00:04:49.028 00:04:49.028 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.028 suites 1 1 n/a 0 0 00:04:49.028 tests 1 1 1 0 0 00:04:49.028 asserts 15 15 15 0 n/a 00:04:49.028 00:04:49.028 Elapsed time = 0.009 seconds 00:04:49.028 00:04:49.028 real 0m0.143s 00:04:49.028 user 0m0.015s 00:04:49.028 sys 0m0.027s 00:04:49.028 12:29:21 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.028 ************************************ 00:04:49.028 12:29:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:49.028 END TEST env_mem_callbacks 00:04:49.028 ************************************ 00:04:49.286 12:29:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.286 ************************************ 00:04:49.286 END TEST env 00:04:49.286 ************************************ 00:04:49.286 00:04:49.286 real 0m2.395s 00:04:49.286 user 0m1.228s 00:04:49.286 sys 0m0.820s 00:04:49.286 12:29:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.286 12:29:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.286 12:29:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.286 12:29:21 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.286 12:29:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.286 12:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.286 12:29:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.286 ************************************ 00:04:49.286 START TEST rpc 00:04:49.286 ************************************ 00:04:49.286 12:29:21 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.286 * Looking for test storage... 00:04:49.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.286 12:29:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58750 00:04:49.286 12:29:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:49.286 12:29:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.286 12:29:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58750 00:04:49.286 12:29:21 rpc -- common/autotest_common.sh@829 -- # '[' -z 58750 ']' 00:04:49.286 12:29:21 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.286 12:29:21 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.286 12:29:21 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
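The rpc suite launched above starts build/bin/spdk_tgt -e bdev (the bdev tracepoint group it enables is checked later by rpc_trace_cmd_test) and waits for the default UNIX socket /var/tmp/spdk.sock. The rpc_cmd helper used throughout the suite is, roughly, a convenience wrapper around scripts/rpc.py, so the same checks can be reproduced by hand once the target is up (a hedged sketch; assumes the default socket path):

  # confirm the RPC listener is reachable before issuing real commands
  ./scripts/rpc.py rpc_get_methods > /dev/null && echo 'RPC socket is up'
  ./scripts/rpc.py bdev_get_bdevs        # prints '[]' until a bdev is created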
00:04:49.287 12:29:21 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.287 12:29:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.287 [2024-07-15 12:29:21.937613] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:49.287 [2024-07-15 12:29:21.937754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58750 ] 00:04:49.545 [2024-07-15 12:29:22.081994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.803 [2024-07-15 12:29:22.239495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.803 [2024-07-15 12:29:22.239567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58750' to capture a snapshot of events at runtime. 00:04:49.803 [2024-07-15 12:29:22.239583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.803 [2024-07-15 12:29:22.239595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.803 [2024-07-15 12:29:22.239604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58750 for offline analysis/debug. 00:04:49.803 [2024-07-15 12:29:22.239646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.803 [2024-07-15 12:29:22.298578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.371 12:29:22 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.371 12:29:22 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:50.371 12:29:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.371 12:29:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.371 12:29:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.371 12:29:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.371 12:29:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.371 12:29:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.371 12:29:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.371 ************************************ 00:04:50.371 START TEST rpc_integrity 00:04:50.371 ************************************ 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.371 12:29:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.371 { 00:04:50.371 "name": "Malloc0", 00:04:50.371 "aliases": [ 00:04:50.371 "db78b931-bd1b-479c-87f1-64b9cfa7a36d" 00:04:50.371 ], 00:04:50.371 "product_name": "Malloc disk", 00:04:50.371 "block_size": 512, 00:04:50.371 "num_blocks": 16384, 00:04:50.371 "uuid": "db78b931-bd1b-479c-87f1-64b9cfa7a36d", 00:04:50.371 "assigned_rate_limits": { 00:04:50.371 "rw_ios_per_sec": 0, 00:04:50.371 "rw_mbytes_per_sec": 0, 00:04:50.371 "r_mbytes_per_sec": 0, 00:04:50.371 "w_mbytes_per_sec": 0 00:04:50.371 }, 00:04:50.371 "claimed": false, 00:04:50.371 "zoned": false, 00:04:50.371 "supported_io_types": { 00:04:50.371 "read": true, 00:04:50.371 "write": true, 00:04:50.371 "unmap": true, 00:04:50.371 "flush": true, 00:04:50.371 "reset": true, 00:04:50.371 "nvme_admin": false, 00:04:50.371 "nvme_io": false, 00:04:50.371 "nvme_io_md": false, 00:04:50.371 "write_zeroes": true, 00:04:50.371 "zcopy": true, 00:04:50.371 "get_zone_info": false, 00:04:50.371 "zone_management": false, 00:04:50.371 "zone_append": false, 00:04:50.371 "compare": false, 00:04:50.371 "compare_and_write": false, 00:04:50.371 "abort": true, 00:04:50.371 "seek_hole": false, 00:04:50.371 "seek_data": false, 00:04:50.371 "copy": true, 00:04:50.371 "nvme_iov_md": false 00:04:50.371 }, 00:04:50.371 "memory_domains": [ 00:04:50.371 { 00:04:50.371 "dma_device_id": "system", 00:04:50.371 "dma_device_type": 1 00:04:50.371 }, 00:04:50.371 { 00:04:50.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.371 "dma_device_type": 2 00:04:50.371 } 00:04:50.371 ], 00:04:50.371 "driver_specific": {} 00:04:50.371 } 00:04:50.371 ]' 00:04:50.371 12:29:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.371 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.371 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.371 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.371 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.371 [2024-07-15 12:29:23.023667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.371 [2024-07-15 12:29:23.023725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.371 [2024-07-15 12:29:23.023766] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be1da0 00:04:50.371 [2024-07-15 12:29:23.023778] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.371 [2024-07-15 12:29:23.025562] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.371 [2024-07-15 12:29:23.025598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:50.371 Passthru0 00:04:50.371 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.371 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.371 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.371 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.631 { 00:04:50.631 "name": "Malloc0", 00:04:50.631 "aliases": [ 00:04:50.631 "db78b931-bd1b-479c-87f1-64b9cfa7a36d" 00:04:50.631 ], 00:04:50.631 "product_name": "Malloc disk", 00:04:50.631 "block_size": 512, 00:04:50.631 "num_blocks": 16384, 00:04:50.631 "uuid": "db78b931-bd1b-479c-87f1-64b9cfa7a36d", 00:04:50.631 "assigned_rate_limits": { 00:04:50.631 "rw_ios_per_sec": 0, 00:04:50.631 "rw_mbytes_per_sec": 0, 00:04:50.631 "r_mbytes_per_sec": 0, 00:04:50.631 "w_mbytes_per_sec": 0 00:04:50.631 }, 00:04:50.631 "claimed": true, 00:04:50.631 "claim_type": "exclusive_write", 00:04:50.631 "zoned": false, 00:04:50.631 "supported_io_types": { 00:04:50.631 "read": true, 00:04:50.631 "write": true, 00:04:50.631 "unmap": true, 00:04:50.631 "flush": true, 00:04:50.631 "reset": true, 00:04:50.631 "nvme_admin": false, 00:04:50.631 "nvme_io": false, 00:04:50.631 "nvme_io_md": false, 00:04:50.631 "write_zeroes": true, 00:04:50.631 "zcopy": true, 00:04:50.631 "get_zone_info": false, 00:04:50.631 "zone_management": false, 00:04:50.631 "zone_append": false, 00:04:50.631 "compare": false, 00:04:50.631 "compare_and_write": false, 00:04:50.631 "abort": true, 00:04:50.631 "seek_hole": false, 00:04:50.631 "seek_data": false, 00:04:50.631 "copy": true, 00:04:50.631 "nvme_iov_md": false 00:04:50.631 }, 00:04:50.631 "memory_domains": [ 00:04:50.631 { 00:04:50.631 "dma_device_id": "system", 00:04:50.631 "dma_device_type": 1 00:04:50.631 }, 00:04:50.631 { 00:04:50.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.631 "dma_device_type": 2 00:04:50.631 } 00:04:50.631 ], 00:04:50.631 "driver_specific": {} 00:04:50.631 }, 00:04:50.631 { 00:04:50.631 "name": "Passthru0", 00:04:50.631 "aliases": [ 00:04:50.631 "4b99e5eb-f1db-54bd-8b1f-49b32a800ef3" 00:04:50.631 ], 00:04:50.631 "product_name": "passthru", 00:04:50.631 "block_size": 512, 00:04:50.631 "num_blocks": 16384, 00:04:50.631 "uuid": "4b99e5eb-f1db-54bd-8b1f-49b32a800ef3", 00:04:50.631 "assigned_rate_limits": { 00:04:50.631 "rw_ios_per_sec": 0, 00:04:50.631 "rw_mbytes_per_sec": 0, 00:04:50.631 "r_mbytes_per_sec": 0, 00:04:50.631 "w_mbytes_per_sec": 0 00:04:50.631 }, 00:04:50.631 "claimed": false, 00:04:50.631 "zoned": false, 00:04:50.631 "supported_io_types": { 00:04:50.631 "read": true, 00:04:50.631 "write": true, 00:04:50.631 "unmap": true, 00:04:50.631 "flush": true, 00:04:50.631 "reset": true, 00:04:50.631 "nvme_admin": false, 00:04:50.631 "nvme_io": false, 00:04:50.631 "nvme_io_md": false, 00:04:50.631 "write_zeroes": true, 00:04:50.631 "zcopy": true, 00:04:50.631 "get_zone_info": false, 00:04:50.631 "zone_management": false, 00:04:50.631 "zone_append": false, 00:04:50.631 "compare": false, 00:04:50.631 "compare_and_write": false, 00:04:50.631 "abort": true, 00:04:50.631 "seek_hole": false, 00:04:50.631 "seek_data": false, 00:04:50.631 "copy": true, 00:04:50.631 "nvme_iov_md": false 00:04:50.631 }, 00:04:50.631 "memory_domains": [ 00:04:50.631 { 00:04:50.631 "dma_device_id": "system", 00:04:50.631 
"dma_device_type": 1 00:04:50.631 }, 00:04:50.631 { 00:04:50.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.631 "dma_device_type": 2 00:04:50.631 } 00:04:50.631 ], 00:04:50.631 "driver_specific": { 00:04:50.631 "passthru": { 00:04:50.631 "name": "Passthru0", 00:04:50.631 "base_bdev_name": "Malloc0" 00:04:50.631 } 00:04:50.631 } 00:04:50.631 } 00:04:50.631 ]' 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.631 ************************************ 00:04:50.631 END TEST rpc_integrity 00:04:50.631 ************************************ 00:04:50.631 12:29:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.631 00:04:50.631 real 0m0.318s 00:04:50.631 user 0m0.212s 00:04:50.631 sys 0m0.035s 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.631 12:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 12:29:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.631 12:29:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.631 12:29:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.631 12:29:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.631 12:29:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 ************************************ 00:04:50.631 START TEST rpc_plugins 00:04:50.631 ************************************ 00:04:50.631 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:50.631 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:50.631 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.631 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.631 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:50.631 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:50.631 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.631 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.631 
12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.631 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:50.631 { 00:04:50.631 "name": "Malloc1", 00:04:50.631 "aliases": [ 00:04:50.631 "881478f5-4572-4f0c-a1fb-d5342dd39f48" 00:04:50.631 ], 00:04:50.631 "product_name": "Malloc disk", 00:04:50.632 "block_size": 4096, 00:04:50.632 "num_blocks": 256, 00:04:50.632 "uuid": "881478f5-4572-4f0c-a1fb-d5342dd39f48", 00:04:50.632 "assigned_rate_limits": { 00:04:50.632 "rw_ios_per_sec": 0, 00:04:50.632 "rw_mbytes_per_sec": 0, 00:04:50.632 "r_mbytes_per_sec": 0, 00:04:50.632 "w_mbytes_per_sec": 0 00:04:50.632 }, 00:04:50.632 "claimed": false, 00:04:50.632 "zoned": false, 00:04:50.632 "supported_io_types": { 00:04:50.632 "read": true, 00:04:50.632 "write": true, 00:04:50.632 "unmap": true, 00:04:50.632 "flush": true, 00:04:50.632 "reset": true, 00:04:50.632 "nvme_admin": false, 00:04:50.632 "nvme_io": false, 00:04:50.632 "nvme_io_md": false, 00:04:50.632 "write_zeroes": true, 00:04:50.632 "zcopy": true, 00:04:50.632 "get_zone_info": false, 00:04:50.632 "zone_management": false, 00:04:50.632 "zone_append": false, 00:04:50.632 "compare": false, 00:04:50.632 "compare_and_write": false, 00:04:50.632 "abort": true, 00:04:50.632 "seek_hole": false, 00:04:50.632 "seek_data": false, 00:04:50.632 "copy": true, 00:04:50.632 "nvme_iov_md": false 00:04:50.632 }, 00:04:50.632 "memory_domains": [ 00:04:50.632 { 00:04:50.632 "dma_device_id": "system", 00:04:50.632 "dma_device_type": 1 00:04:50.632 }, 00:04:50.632 { 00:04:50.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.632 "dma_device_type": 2 00:04:50.632 } 00:04:50.632 ], 00:04:50.632 "driver_specific": {} 00:04:50.632 } 00:04:50.632 ]' 00:04:50.632 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:50.890 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:50.890 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.890 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.890 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:50.890 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:50.890 ************************************ 00:04:50.890 END TEST rpc_plugins 00:04:50.890 ************************************ 00:04:50.890 12:29:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:50.890 00:04:50.890 real 0m0.150s 00:04:50.890 user 0m0.094s 00:04:50.890 sys 0m0.019s 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.890 12:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.890 12:29:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.890 12:29:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:50.890 12:29:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.890 12:29:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
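rpc_integrity and rpc_plugins above both follow the same create/inspect/delete pattern over the RPC socket; a manual equivalent with scripts/rpc.py against the same target looks like the following (names such as Malloc0 are whatever bdev_malloc_create returns):

  ./scripts/rpc.py bdev_malloc_create 8 512                    # 8 MB malloc bdev, 512-byte blocks
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                  # 2: Malloc0 plus Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length                  # back to 0

rpc_plugins exercises the same idea through rpc.py's --plugin option, with test/rpc_plugins on PYTHONPATH so the rpc_plugin module's create_malloc and delete_malloc commands can be loaded.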
00:04:50.890 12:29:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.890 ************************************ 00:04:50.890 START TEST rpc_trace_cmd_test 00:04:50.890 ************************************ 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:50.890 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58750", 00:04:50.890 "tpoint_group_mask": "0x8", 00:04:50.890 "iscsi_conn": { 00:04:50.890 "mask": "0x2", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "scsi": { 00:04:50.890 "mask": "0x4", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "bdev": { 00:04:50.890 "mask": "0x8", 00:04:50.890 "tpoint_mask": "0xffffffffffffffff" 00:04:50.890 }, 00:04:50.890 "nvmf_rdma": { 00:04:50.890 "mask": "0x10", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "nvmf_tcp": { 00:04:50.890 "mask": "0x20", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "ftl": { 00:04:50.890 "mask": "0x40", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "blobfs": { 00:04:50.890 "mask": "0x80", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "dsa": { 00:04:50.890 "mask": "0x200", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "thread": { 00:04:50.890 "mask": "0x400", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "nvme_pcie": { 00:04:50.890 "mask": "0x800", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "iaa": { 00:04:50.890 "mask": "0x1000", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "nvme_tcp": { 00:04:50.890 "mask": "0x2000", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "bdev_nvme": { 00:04:50.890 "mask": "0x4000", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 }, 00:04:50.890 "sock": { 00:04:50.890 "mask": "0x8000", 00:04:50.890 "tpoint_mask": "0x0" 00:04:50.890 } 00:04:50.890 }' 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:50.890 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.149 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.149 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.149 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.149 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:51.149 ************************************ 00:04:51.149 END TEST rpc_trace_cmd_test 00:04:51.149 ************************************ 00:04:51.149 12:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:51.149 00:04:51.149 real 0m0.245s 00:04:51.149 user 0m0.208s 
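rpc_trace_cmd_test above confirms that trace_get_info reports the shared-memory file /dev/shm/spdk_tgt_trace.pid58750 and a non-zero bdev tpoint mask (0xffffffffffffffff) under tpoint_group_mask 0x8, i.e. the group enabled by starting the target with -e bdev. The target's own startup notice gives the matching capture command; a hedged sketch of using it while the target from this run is alive (the spdk_trace binary is assumed to live under build/bin in this checkout):

  ./scripts/rpc.py trace_get_info | jq .tpoint_shm_path
  ./build/bin/spdk_trace -s spdk_tgt -p 58750   # snapshot the events recorded so far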
00:04:51.149 sys 0m0.028s 00:04:51.149 12:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.149 12:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.149 12:29:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.149 12:29:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.149 12:29:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.149 12:29:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.149 12:29:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.149 12:29:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.149 12:29:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.149 ************************************ 00:04:51.149 START TEST rpc_daemon_integrity 00:04:51.149 ************************************ 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.149 { 00:04:51.149 "name": "Malloc2", 00:04:51.149 "aliases": [ 00:04:51.149 "a810dfb5-c11f-491c-b6f2-4f4728329c50" 00:04:51.149 ], 00:04:51.149 "product_name": "Malloc disk", 00:04:51.149 "block_size": 512, 00:04:51.149 "num_blocks": 16384, 00:04:51.149 "uuid": "a810dfb5-c11f-491c-b6f2-4f4728329c50", 00:04:51.149 "assigned_rate_limits": { 00:04:51.149 "rw_ios_per_sec": 0, 00:04:51.149 "rw_mbytes_per_sec": 0, 00:04:51.149 "r_mbytes_per_sec": 0, 00:04:51.149 "w_mbytes_per_sec": 0 00:04:51.149 }, 00:04:51.149 "claimed": false, 00:04:51.149 "zoned": false, 00:04:51.149 "supported_io_types": { 00:04:51.149 "read": true, 00:04:51.149 "write": true, 00:04:51.149 "unmap": true, 00:04:51.149 "flush": true, 00:04:51.149 "reset": true, 00:04:51.149 "nvme_admin": false, 00:04:51.149 "nvme_io": false, 00:04:51.149 "nvme_io_md": false, 00:04:51.149 "write_zeroes": true, 00:04:51.149 "zcopy": true, 00:04:51.149 "get_zone_info": false, 00:04:51.149 "zone_management": false, 00:04:51.149 "zone_append": false, 
00:04:51.149 "compare": false, 00:04:51.149 "compare_and_write": false, 00:04:51.149 "abort": true, 00:04:51.149 "seek_hole": false, 00:04:51.149 "seek_data": false, 00:04:51.149 "copy": true, 00:04:51.149 "nvme_iov_md": false 00:04:51.149 }, 00:04:51.149 "memory_domains": [ 00:04:51.149 { 00:04:51.149 "dma_device_id": "system", 00:04:51.149 "dma_device_type": 1 00:04:51.149 }, 00:04:51.149 { 00:04:51.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.149 "dma_device_type": 2 00:04:51.149 } 00:04:51.149 ], 00:04:51.149 "driver_specific": {} 00:04:51.149 } 00:04:51.149 ]' 00:04:51.149 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.408 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.408 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:51.408 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.408 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.408 [2024-07-15 12:29:23.868271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:51.409 [2024-07-15 12:29:23.868327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.409 [2024-07-15 12:29:23.868352] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c46be0 00:04:51.409 [2024-07-15 12:29:23.868363] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.409 [2024-07-15 12:29:23.870045] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.409 [2024-07-15 12:29:23.870082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.409 Passthru0 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.409 { 00:04:51.409 "name": "Malloc2", 00:04:51.409 "aliases": [ 00:04:51.409 "a810dfb5-c11f-491c-b6f2-4f4728329c50" 00:04:51.409 ], 00:04:51.409 "product_name": "Malloc disk", 00:04:51.409 "block_size": 512, 00:04:51.409 "num_blocks": 16384, 00:04:51.409 "uuid": "a810dfb5-c11f-491c-b6f2-4f4728329c50", 00:04:51.409 "assigned_rate_limits": { 00:04:51.409 "rw_ios_per_sec": 0, 00:04:51.409 "rw_mbytes_per_sec": 0, 00:04:51.409 "r_mbytes_per_sec": 0, 00:04:51.409 "w_mbytes_per_sec": 0 00:04:51.409 }, 00:04:51.409 "claimed": true, 00:04:51.409 "claim_type": "exclusive_write", 00:04:51.409 "zoned": false, 00:04:51.409 "supported_io_types": { 00:04:51.409 "read": true, 00:04:51.409 "write": true, 00:04:51.409 "unmap": true, 00:04:51.409 "flush": true, 00:04:51.409 "reset": true, 00:04:51.409 "nvme_admin": false, 00:04:51.409 "nvme_io": false, 00:04:51.409 "nvme_io_md": false, 00:04:51.409 "write_zeroes": true, 00:04:51.409 "zcopy": true, 00:04:51.409 "get_zone_info": false, 00:04:51.409 "zone_management": false, 00:04:51.409 "zone_append": false, 00:04:51.409 "compare": false, 00:04:51.409 "compare_and_write": false, 00:04:51.409 "abort": true, 00:04:51.409 "seek_hole": 
false, 00:04:51.409 "seek_data": false, 00:04:51.409 "copy": true, 00:04:51.409 "nvme_iov_md": false 00:04:51.409 }, 00:04:51.409 "memory_domains": [ 00:04:51.409 { 00:04:51.409 "dma_device_id": "system", 00:04:51.409 "dma_device_type": 1 00:04:51.409 }, 00:04:51.409 { 00:04:51.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.409 "dma_device_type": 2 00:04:51.409 } 00:04:51.409 ], 00:04:51.409 "driver_specific": {} 00:04:51.409 }, 00:04:51.409 { 00:04:51.409 "name": "Passthru0", 00:04:51.409 "aliases": [ 00:04:51.409 "5c881824-1c05-5dd8-8428-3c3c43e5e290" 00:04:51.409 ], 00:04:51.409 "product_name": "passthru", 00:04:51.409 "block_size": 512, 00:04:51.409 "num_blocks": 16384, 00:04:51.409 "uuid": "5c881824-1c05-5dd8-8428-3c3c43e5e290", 00:04:51.409 "assigned_rate_limits": { 00:04:51.409 "rw_ios_per_sec": 0, 00:04:51.409 "rw_mbytes_per_sec": 0, 00:04:51.409 "r_mbytes_per_sec": 0, 00:04:51.409 "w_mbytes_per_sec": 0 00:04:51.409 }, 00:04:51.409 "claimed": false, 00:04:51.409 "zoned": false, 00:04:51.409 "supported_io_types": { 00:04:51.409 "read": true, 00:04:51.409 "write": true, 00:04:51.409 "unmap": true, 00:04:51.409 "flush": true, 00:04:51.409 "reset": true, 00:04:51.409 "nvme_admin": false, 00:04:51.409 "nvme_io": false, 00:04:51.409 "nvme_io_md": false, 00:04:51.409 "write_zeroes": true, 00:04:51.409 "zcopy": true, 00:04:51.409 "get_zone_info": false, 00:04:51.409 "zone_management": false, 00:04:51.409 "zone_append": false, 00:04:51.409 "compare": false, 00:04:51.409 "compare_and_write": false, 00:04:51.409 "abort": true, 00:04:51.409 "seek_hole": false, 00:04:51.409 "seek_data": false, 00:04:51.409 "copy": true, 00:04:51.409 "nvme_iov_md": false 00:04:51.409 }, 00:04:51.409 "memory_domains": [ 00:04:51.409 { 00:04:51.409 "dma_device_id": "system", 00:04:51.409 "dma_device_type": 1 00:04:51.409 }, 00:04:51.409 { 00:04:51.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.409 "dma_device_type": 2 00:04:51.409 } 00:04:51.409 ], 00:04:51.409 "driver_specific": { 00:04:51.409 "passthru": { 00:04:51.409 "name": "Passthru0", 00:04:51.409 "base_bdev_name": "Malloc2" 00:04:51.409 } 00:04:51.409 } 00:04:51.409 } 00:04:51.409 ]' 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.409 12:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.409 ************************************ 00:04:51.409 END TEST rpc_daemon_integrity 00:04:51.409 ************************************ 00:04:51.409 12:29:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.409 00:04:51.409 real 0m0.296s 00:04:51.410 user 0m0.189s 00:04:51.410 sys 0m0.040s 00:04:51.410 12:29:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.410 12:29:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.410 12:29:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.410 12:29:24 rpc -- rpc/rpc.sh@84 -- # killprocess 58750 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@948 -- # '[' -z 58750 ']' 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@952 -- # kill -0 58750 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@953 -- # uname 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58750 00:04:51.410 killing process with pid 58750 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58750' 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@967 -- # kill 58750 00:04:51.410 12:29:24 rpc -- common/autotest_common.sh@972 -- # wait 58750 00:04:51.999 00:04:51.999 real 0m2.705s 00:04:51.999 user 0m3.421s 00:04:51.999 sys 0m0.669s 00:04:51.999 12:29:24 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.999 12:29:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.999 ************************************ 00:04:51.999 END TEST rpc 00:04:51.999 ************************************ 00:04:51.999 12:29:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.999 12:29:24 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:51.999 12:29:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.999 12:29:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.999 12:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:51.999 ************************************ 00:04:51.999 START TEST skip_rpc 00:04:51.999 ************************************ 00:04:51.999 12:29:24 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:51.999 * Looking for test storage... 
00:04:51.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:51.999 12:29:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.999 12:29:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:51.999 12:29:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:51.999 12:29:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.999 12:29:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.999 12:29:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.999 ************************************ 00:04:51.999 START TEST skip_rpc 00:04:51.999 ************************************ 00:04:51.999 12:29:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:51.999 12:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58937 00:04:51.999 12:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.999 12:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:51.999 12:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:52.258 [2024-07-15 12:29:24.688327] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:52.258 [2024-07-15 12:29:24.688438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58937 ] 00:04:52.258 [2024-07-15 12:29:24.829749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.517 [2024-07-15 12:29:24.957984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.517 [2024-07-15 12:29:25.014759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58937 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58937 ']' 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58937 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58937 00:04:57.825 killing process with pid 58937 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58937' 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58937 00:04:57.825 12:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58937 00:04:57.825 00:04:57.825 real 0m5.432s 00:04:57.825 user 0m5.046s 00:04:57.825 sys 0m0.285s 00:04:57.825 ************************************ 00:04:57.825 END TEST skip_rpc 00:04:57.825 ************************************ 00:04:57.825 12:29:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.825 12:29:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.825 12:29:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.825 12:29:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:57.825 12:29:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.825 12:29:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.825 12:29:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.825 ************************************ 00:04:57.825 START TEST skip_rpc_with_json 00:04:57.825 ************************************ 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59029 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59029 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59029 ']' 00:04:57.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
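The skip_rpc case that finished above starts the target with --no-rpc-server and asserts that an RPC call fails, which is what the NOT rpc_cmd spdk_get_version check expresses. A hand-run sketch of the same idea (the sleep and kill are just bookkeeping for the example):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 1                                  # give the target a moment to start
  ! ./scripts/rpc.py spdk_get_version      # expected to fail: no RPC listener
  kill $tgt_pid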
00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.825 12:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.825 [2024-07-15 12:29:30.171863] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:57.825 [2024-07-15 12:29:30.171966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59029 ] 00:04:57.826 [2024-07-15 12:29:30.307019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.826 [2024-07-15 12:29:30.426987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.826 [2024-07-15 12:29:30.481199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.800 [2024-07-15 12:29:31.207283] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:58.800 request: 00:04:58.800 { 00:04:58.800 "trtype": "tcp", 00:04:58.800 "method": "nvmf_get_transports", 00:04:58.800 "req_id": 1 00:04:58.800 } 00:04:58.800 Got JSON-RPC error response 00:04:58.800 response: 00:04:58.800 { 00:04:58.800 "code": -19, 00:04:58.800 "message": "No such device" 00:04:58.800 } 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.800 [2024-07-15 12:29:31.219378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.800 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.800 { 00:04:58.800 "subsystems": [ 00:04:58.800 { 00:04:58.800 "subsystem": "keyring", 00:04:58.800 "config": [] 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "subsystem": "iobuf", 00:04:58.800 "config": [ 00:04:58.800 { 00:04:58.800 "method": "iobuf_set_options", 00:04:58.800 "params": { 00:04:58.800 "small_pool_count": 8192, 00:04:58.800 "large_pool_count": 1024, 00:04:58.800 "small_bufsize": 8192, 00:04:58.800 "large_bufsize": 135168 00:04:58.800 } 00:04:58.800 } 00:04:58.800 
] 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "subsystem": "sock", 00:04:58.800 "config": [ 00:04:58.800 { 00:04:58.800 "method": "sock_set_default_impl", 00:04:58.800 "params": { 00:04:58.800 "impl_name": "uring" 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "sock_impl_set_options", 00:04:58.800 "params": { 00:04:58.800 "impl_name": "ssl", 00:04:58.800 "recv_buf_size": 4096, 00:04:58.800 "send_buf_size": 4096, 00:04:58.800 "enable_recv_pipe": true, 00:04:58.800 "enable_quickack": false, 00:04:58.800 "enable_placement_id": 0, 00:04:58.800 "enable_zerocopy_send_server": true, 00:04:58.800 "enable_zerocopy_send_client": false, 00:04:58.800 "zerocopy_threshold": 0, 00:04:58.800 "tls_version": 0, 00:04:58.800 "enable_ktls": false 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "sock_impl_set_options", 00:04:58.800 "params": { 00:04:58.800 "impl_name": "posix", 00:04:58.800 "recv_buf_size": 2097152, 00:04:58.800 "send_buf_size": 2097152, 00:04:58.800 "enable_recv_pipe": true, 00:04:58.800 "enable_quickack": false, 00:04:58.800 "enable_placement_id": 0, 00:04:58.800 "enable_zerocopy_send_server": true, 00:04:58.800 "enable_zerocopy_send_client": false, 00:04:58.800 "zerocopy_threshold": 0, 00:04:58.800 "tls_version": 0, 00:04:58.800 "enable_ktls": false 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "sock_impl_set_options", 00:04:58.800 "params": { 00:04:58.800 "impl_name": "uring", 00:04:58.800 "recv_buf_size": 2097152, 00:04:58.800 "send_buf_size": 2097152, 00:04:58.800 "enable_recv_pipe": true, 00:04:58.800 "enable_quickack": false, 00:04:58.800 "enable_placement_id": 0, 00:04:58.800 "enable_zerocopy_send_server": false, 00:04:58.800 "enable_zerocopy_send_client": false, 00:04:58.800 "zerocopy_threshold": 0, 00:04:58.800 "tls_version": 0, 00:04:58.800 "enable_ktls": false 00:04:58.800 } 00:04:58.800 } 00:04:58.800 ] 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "subsystem": "vmd", 00:04:58.800 "config": [] 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "subsystem": "accel", 00:04:58.800 "config": [ 00:04:58.800 { 00:04:58.800 "method": "accel_set_options", 00:04:58.800 "params": { 00:04:58.800 "small_cache_size": 128, 00:04:58.800 "large_cache_size": 16, 00:04:58.800 "task_count": 2048, 00:04:58.800 "sequence_count": 2048, 00:04:58.800 "buf_count": 2048 00:04:58.800 } 00:04:58.800 } 00:04:58.800 ] 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "subsystem": "bdev", 00:04:58.800 "config": [ 00:04:58.800 { 00:04:58.800 "method": "bdev_set_options", 00:04:58.800 "params": { 00:04:58.800 "bdev_io_pool_size": 65535, 00:04:58.800 "bdev_io_cache_size": 256, 00:04:58.800 "bdev_auto_examine": true, 00:04:58.800 "iobuf_small_cache_size": 128, 00:04:58.800 "iobuf_large_cache_size": 16 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "bdev_raid_set_options", 00:04:58.800 "params": { 00:04:58.800 "process_window_size_kb": 1024 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "bdev_iscsi_set_options", 00:04:58.800 "params": { 00:04:58.800 "timeout_sec": 30 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "bdev_nvme_set_options", 00:04:58.800 "params": { 00:04:58.800 "action_on_timeout": "none", 00:04:58.800 "timeout_us": 0, 00:04:58.800 "timeout_admin_us": 0, 00:04:58.800 "keep_alive_timeout_ms": 10000, 00:04:58.800 "arbitration_burst": 0, 00:04:58.800 "low_priority_weight": 0, 00:04:58.800 "medium_priority_weight": 0, 00:04:58.800 "high_priority_weight": 0, 00:04:58.800 
"nvme_adminq_poll_period_us": 10000, 00:04:58.800 "nvme_ioq_poll_period_us": 0, 00:04:58.800 "io_queue_requests": 0, 00:04:58.800 "delay_cmd_submit": true, 00:04:58.800 "transport_retry_count": 4, 00:04:58.800 "bdev_retry_count": 3, 00:04:58.800 "transport_ack_timeout": 0, 00:04:58.800 "ctrlr_loss_timeout_sec": 0, 00:04:58.800 "reconnect_delay_sec": 0, 00:04:58.800 "fast_io_fail_timeout_sec": 0, 00:04:58.800 "disable_auto_failback": false, 00:04:58.800 "generate_uuids": false, 00:04:58.800 "transport_tos": 0, 00:04:58.800 "nvme_error_stat": false, 00:04:58.800 "rdma_srq_size": 0, 00:04:58.800 "io_path_stat": false, 00:04:58.800 "allow_accel_sequence": false, 00:04:58.800 "rdma_max_cq_size": 0, 00:04:58.800 "rdma_cm_event_timeout_ms": 0, 00:04:58.800 "dhchap_digests": [ 00:04:58.800 "sha256", 00:04:58.800 "sha384", 00:04:58.800 "sha512" 00:04:58.800 ], 00:04:58.800 "dhchap_dhgroups": [ 00:04:58.800 "null", 00:04:58.800 "ffdhe2048", 00:04:58.800 "ffdhe3072", 00:04:58.800 "ffdhe4096", 00:04:58.800 "ffdhe6144", 00:04:58.800 "ffdhe8192" 00:04:58.800 ] 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "bdev_nvme_set_hotplug", 00:04:58.800 "params": { 00:04:58.800 "period_us": 100000, 00:04:58.800 "enable": false 00:04:58.800 } 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "method": "bdev_wait_for_examine" 00:04:58.800 } 00:04:58.800 ] 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "subsystem": "scsi", 00:04:58.800 "config": null 00:04:58.800 }, 00:04:58.800 { 00:04:58.800 "subsystem": "scheduler", 00:04:58.801 "config": [ 00:04:58.801 { 00:04:58.801 "method": "framework_set_scheduler", 00:04:58.801 "params": { 00:04:58.801 "name": "static" 00:04:58.801 } 00:04:58.801 } 00:04:58.801 ] 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "subsystem": "vhost_scsi", 00:04:58.801 "config": [] 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "subsystem": "vhost_blk", 00:04:58.801 "config": [] 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "subsystem": "ublk", 00:04:58.801 "config": [] 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "subsystem": "nbd", 00:04:58.801 "config": [] 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "subsystem": "nvmf", 00:04:58.801 "config": [ 00:04:58.801 { 00:04:58.801 "method": "nvmf_set_config", 00:04:58.801 "params": { 00:04:58.801 "discovery_filter": "match_any", 00:04:58.801 "admin_cmd_passthru": { 00:04:58.801 "identify_ctrlr": false 00:04:58.801 } 00:04:58.801 } 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "method": "nvmf_set_max_subsystems", 00:04:58.801 "params": { 00:04:58.801 "max_subsystems": 1024 00:04:58.801 } 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "method": "nvmf_set_crdt", 00:04:58.801 "params": { 00:04:58.801 "crdt1": 0, 00:04:58.801 "crdt2": 0, 00:04:58.801 "crdt3": 0 00:04:58.801 } 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "method": "nvmf_create_transport", 00:04:58.801 "params": { 00:04:58.801 "trtype": "TCP", 00:04:58.801 "max_queue_depth": 128, 00:04:58.801 "max_io_qpairs_per_ctrlr": 127, 00:04:58.801 "in_capsule_data_size": 4096, 00:04:58.801 "max_io_size": 131072, 00:04:58.801 "io_unit_size": 131072, 00:04:58.801 "max_aq_depth": 128, 00:04:58.801 "num_shared_buffers": 511, 00:04:58.801 "buf_cache_size": 4294967295, 00:04:58.801 "dif_insert_or_strip": false, 00:04:58.801 "zcopy": false, 00:04:58.801 "c2h_success": true, 00:04:58.801 "sock_priority": 0, 00:04:58.801 "abort_timeout_sec": 1, 00:04:58.801 "ack_timeout": 0, 00:04:58.801 "data_wr_pool_size": 0 00:04:58.801 } 00:04:58.801 } 00:04:58.801 ] 00:04:58.801 }, 00:04:58.801 { 00:04:58.801 "subsystem": 
"iscsi", 00:04:58.801 "config": [ 00:04:58.801 { 00:04:58.801 "method": "iscsi_set_options", 00:04:58.801 "params": { 00:04:58.801 "node_base": "iqn.2016-06.io.spdk", 00:04:58.801 "max_sessions": 128, 00:04:58.801 "max_connections_per_session": 2, 00:04:58.801 "max_queue_depth": 64, 00:04:58.801 "default_time2wait": 2, 00:04:58.801 "default_time2retain": 20, 00:04:58.801 "first_burst_length": 8192, 00:04:58.801 "immediate_data": true, 00:04:58.801 "allow_duplicated_isid": false, 00:04:58.801 "error_recovery_level": 0, 00:04:58.801 "nop_timeout": 60, 00:04:58.801 "nop_in_interval": 30, 00:04:58.801 "disable_chap": false, 00:04:58.801 "require_chap": false, 00:04:58.801 "mutual_chap": false, 00:04:58.801 "chap_group": 0, 00:04:58.801 "max_large_datain_per_connection": 64, 00:04:58.801 "max_r2t_per_connection": 4, 00:04:58.801 "pdu_pool_size": 36864, 00:04:58.801 "immediate_data_pool_size": 16384, 00:04:58.801 "data_out_pool_size": 2048 00:04:58.801 } 00:04:58.801 } 00:04:58.801 ] 00:04:58.801 } 00:04:58.801 ] 00:04:58.801 } 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59029 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59029 ']' 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59029 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59029 00:04:58.801 killing process with pid 59029 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59029' 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59029 00:04:58.801 12:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59029 00:04:59.369 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59051 00:04:59.369 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.369 12:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59051 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59051 ']' 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59051 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59051 00:05:04.636 killing process with pid 59051 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.636 12:29:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59051' 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59051 00:05:04.636 12:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59051 00:05:04.636 12:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.636 12:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.636 00:05:04.636 real 0m7.135s 00:05:04.636 user 0m6.875s 00:05:04.636 sys 0m0.696s 00:05:04.636 12:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.636 12:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.637 ************************************ 00:05:04.637 END TEST skip_rpc_with_json 00:05:04.637 ************************************ 00:05:04.637 12:29:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.637 12:29:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.637 12:29:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.637 12:29:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.637 12:29:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.637 ************************************ 00:05:04.637 START TEST skip_rpc_with_delay 00:05:04.637 ************************************ 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:04.637 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.895 [2024-07-15 
12:29:37.361060] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:04.895 [2024-07-15 12:29:37.361183] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:04.895 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:04.895 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:04.895 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:04.895 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:04.895 00:05:04.895 real 0m0.089s 00:05:04.895 user 0m0.055s 00:05:04.895 sys 0m0.034s 00:05:04.895 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.895 ************************************ 00:05:04.895 END TEST skip_rpc_with_delay 00:05:04.895 ************************************ 00:05:04.895 12:29:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:04.895 12:29:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.895 12:29:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:04.895 12:29:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:04.895 12:29:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:04.895 12:29:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.895 12:29:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.895 12:29:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.895 ************************************ 00:05:04.895 START TEST exit_on_failed_rpc_init 00:05:04.895 ************************************ 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59166 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59166 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59166 ']' 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.895 12:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.896 12:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.896 12:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.896 [2024-07-15 12:29:37.502904] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:04.896 [2024-07-15 12:29:37.503281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59166 ] 00:05:05.154 [2024-07-15 12:29:37.644894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.154 [2024-07-15 12:29:37.774975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.414 [2024-07-15 12:29:37.833898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:05.981 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.981 [2024-07-15 12:29:38.583317] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:05.981 [2024-07-15 12:29:38.583409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59184 ] 00:05:06.241 [2024-07-15 12:29:38.721250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.241 [2024-07-15 12:29:38.852231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.241 [2024-07-15 12:29:38.852615] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
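The "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." error above is exactly what exit_on_failed_rpc_init probes: a second spdk_tgt cannot bind an RPC socket that a running instance already owns. A minimal standalone sketch of the same check, using the spdk_tgt path from this run and a crude socket poll in place of the harness's waitforlisten helper:

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt    # binary path as used in this run
$SPDK_TGT -m 0x1 &                                          # first target claims /var/tmp/spdk.sock
first_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done         # wait for the RPC socket to appear
if $SPDK_TGT -m 0x2; then                                   # second instance is expected to fail
    echo 'unexpected: second spdk_tgt started'; kill "$first_pid"; exit 1
fi
kill -SIGINT "$first_pid" && wait "$first_pid"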
00:05:06.241 [2024-07-15 12:29:38.852886] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.241 [2024-07-15 12:29:38.853080] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59166 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59166 ']' 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59166 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59166 00:05:06.500 killing process with pid 59166 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59166' 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59166 00:05:06.500 12:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59166 00:05:06.758 00:05:06.758 real 0m1.943s 00:05:06.758 user 0m2.290s 00:05:06.758 sys 0m0.457s 00:05:06.758 12:29:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.758 ************************************ 00:05:06.758 END TEST exit_on_failed_rpc_init 00:05:06.758 ************************************ 00:05:06.758 12:29:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.759 12:29:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.759 12:29:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.759 00:05:06.759 real 0m14.898s 00:05:06.759 user 0m14.371s 00:05:06.759 sys 0m1.658s 00:05:06.759 12:29:39 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.759 ************************************ 00:05:06.759 END TEST skip_rpc 00:05:06.759 ************************************ 00:05:06.759 12:29:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.016 12:29:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.016 12:29:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.016 12:29:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.016 
12:29:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.016 12:29:39 -- common/autotest_common.sh@10 -- # set +x 00:05:07.016 ************************************ 00:05:07.016 START TEST rpc_client 00:05:07.016 ************************************ 00:05:07.016 12:29:39 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.016 * Looking for test storage... 00:05:07.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:07.016 12:29:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:07.016 OK 00:05:07.016 12:29:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.016 00:05:07.016 real 0m0.103s 00:05:07.016 user 0m0.044s 00:05:07.016 sys 0m0.065s 00:05:07.016 ************************************ 00:05:07.016 END TEST rpc_client 00:05:07.016 ************************************ 00:05:07.016 12:29:39 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.016 12:29:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.016 12:29:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.016 12:29:39 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.016 12:29:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.016 12:29:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.016 12:29:39 -- common/autotest_common.sh@10 -- # set +x 00:05:07.016 ************************************ 00:05:07.016 START TEST json_config 00:05:07.016 ************************************ 00:05:07.016 12:29:39 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.016 12:29:39 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:07.016 12:29:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:07.016 12:29:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.016 12:29:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.017 12:29:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.275 12:29:39 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.275 12:29:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.275 12:29:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.275 12:29:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.275 12:29:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.275 12:29:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.275 12:29:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.275 12:29:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:07.275 12:29:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@47 -- # : 0 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:07.275 12:29:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.275 INFO: JSON configuration test init 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.275 Waiting for target to run... 00:05:07.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.275 12:29:39 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.275 12:29:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.275 12:29:39 json_config -- json_config/common.sh@10 -- # shift 00:05:07.275 12:29:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.275 12:29:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.275 12:29:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.275 12:29:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.275 12:29:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.275 12:29:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59302 00:05:07.275 12:29:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
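The declare -A lines above are the whole of the json_config harness's per-app bookkeeping: associative arrays holding the PID, RPC socket, and launch parameters of each role. A condensed sketch of the same idea, with a hypothetical start_app helper standing in for json_config_test_start_app:

declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
declare -A app_pid=()

start_app() {                                   # hypothetical helper, not the harness function
    local app=$1; shift
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        ${app_params[$app]} -r "${app_socket[$app]}" "$@" &    # params left unquoted on purpose
    app_pid[$app]=$!
}

start_app target --wait-for-rpc                 # mirrors the launch recorded just below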
00:05:07.275 12:29:39 json_config -- json_config/common.sh@25 -- # waitforlisten 59302 /var/tmp/spdk_tgt.sock 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@829 -- # '[' -z 59302 ']' 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.275 12:29:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.275 12:29:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.276 [2024-07-15 12:29:39.789788] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:07.276 [2024-07-15 12:29:39.790427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:05:07.533 [2024-07-15 12:29:40.208342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.791 [2024-07-15 12:29:40.298618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.358 12:29:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.358 12:29:40 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:08.358 12:29:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.358 00:05:08.358 12:29:40 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:08.358 12:29:40 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:08.358 12:29:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.358 12:29:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.358 12:29:40 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:08.358 12:29:40 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:08.358 12:29:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:08.358 12:29:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.358 12:29:40 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.358 12:29:40 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:08.358 12:29:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.616 [2024-07-15 12:29:41.111298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.876 12:29:41 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:08.876 12:29:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.876 12:29:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.876 12:29:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.876 12:29:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.876 12:29:41 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.876 12:29:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.876 12:29:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:08.876 12:29:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.876 12:29:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:09.135 12:29:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.135 12:29:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:09.135 12:29:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.135 12:29:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:09.135 12:29:41 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.135 12:29:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.393 MallocForNvmf0 00:05:09.393 12:29:41 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.393 12:29:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.652 MallocForNvmf1 00:05:09.652 12:29:42 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.652 12:29:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.910 [2024-07-15 12:29:42.499032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.910 12:29:42 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.910 12:29:42 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.169 12:29:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.169 12:29:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.427 12:29:43 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.427 12:29:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.686 12:29:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.686 12:29:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.944 [2024-07-15 12:29:43.612078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.202 12:29:43 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:11.202 12:29:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.202 12:29:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.202 12:29:43 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:11.202 12:29:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.202 12:29:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.202 12:29:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:11.202 12:29:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.202 12:29:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.478 MallocBdevForConfigChangeCheck 00:05:11.478 12:29:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:11.478 12:29:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.478 12:29:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.478 12:29:44 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:11.478 12:29:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.061 INFO: shutting down applications... 00:05:12.061 12:29:44 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
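Stripped of the harness plumbing, the create_nvmf_subsystem_config step that just completed above reduces to a short sequence of rpc.py calls against the target socket; the commands below are the ones recorded in this log, gathered in one place:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420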
00:05:12.061 12:29:44 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:12.061 12:29:44 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:12.061 12:29:44 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:12.061 12:29:44 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:12.320 Calling clear_iscsi_subsystem 00:05:12.320 Calling clear_nvmf_subsystem 00:05:12.320 Calling clear_nbd_subsystem 00:05:12.320 Calling clear_ublk_subsystem 00:05:12.320 Calling clear_vhost_blk_subsystem 00:05:12.320 Calling clear_vhost_scsi_subsystem 00:05:12.320 Calling clear_bdev_subsystem 00:05:12.320 12:29:44 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:12.320 12:29:44 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:12.320 12:29:44 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:12.320 12:29:44 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.320 12:29:44 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:12.320 12:29:44 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.579 12:29:45 json_config -- json_config/json_config.sh@345 -- # break 00:05:12.579 12:29:45 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:12.579 12:29:45 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:12.579 12:29:45 json_config -- json_config/common.sh@31 -- # local app=target 00:05:12.579 12:29:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.579 12:29:45 json_config -- json_config/common.sh@35 -- # [[ -n 59302 ]] 00:05:12.579 12:29:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59302 00:05:12.579 12:29:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.579 12:29:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.579 12:29:45 json_config -- json_config/common.sh@41 -- # kill -0 59302 00:05:12.579 12:29:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.146 12:29:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.146 12:29:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.146 12:29:45 json_config -- json_config/common.sh@41 -- # kill -0 59302 00:05:13.146 12:29:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.146 12:29:45 json_config -- json_config/common.sh@43 -- # break 00:05:13.146 12:29:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.146 12:29:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.146 SPDK target shutdown done 00:05:13.146 12:29:45 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:13.146 INFO: relaunching applications... 
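The target shutdown above follows a simple pattern: send SIGINT, then poll the PID with kill -0 for at most 30 half-second intervals. A compact equivalent, assuming pid holds the target's process ID:

kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done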
00:05:13.146 12:29:45 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.146 12:29:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:13.146 12:29:45 json_config -- json_config/common.sh@10 -- # shift 00:05:13.146 12:29:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.146 12:29:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.146 12:29:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.146 12:29:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.146 12:29:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.146 Waiting for target to run... 00:05:13.146 12:29:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59498 00:05:13.146 12:29:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.146 12:29:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.146 12:29:45 json_config -- json_config/common.sh@25 -- # waitforlisten 59498 /var/tmp/spdk_tgt.sock 00:05:13.146 12:29:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 59498 ']' 00:05:13.146 12:29:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.146 12:29:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.146 12:29:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.146 12:29:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.146 12:29:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.146 [2024-07-15 12:29:45.737409] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:13.146 [2024-07-15 12:29:45.737513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59498 ] 00:05:13.715 [2024-07-15 12:29:46.147292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.715 [2024-07-15 12:29:46.239584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.715 [2024-07-15 12:29:46.365863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.974 [2024-07-15 12:29:46.577083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.974 [2024-07-15 12:29:46.609151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.232 00:05:14.232 INFO: Checking if target configuration is the same... 
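The "Checking if target configuration is the same..." step announced above, and traced in the json_diff.sh output that follows, boils down to normalizing both JSON documents with config_filter.py -method sort and diffing the results. A condensed sketch, assuming config_filter.py reads stdin when given only -method (that is how it is invoked in this run); the temporary file names here are placeholders for the harness's mktemp output:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

$RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live_sorted.json
$FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk_sorted.json
if diff -u /tmp/live_sorted.json /tmp/disk_sorted.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi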
00:05:14.232 12:29:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.232 12:29:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:14.232 12:29:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.232 12:29:46 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:14.232 12:29:46 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.232 12:29:46 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.232 12:29:46 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:14.232 12:29:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.232 + '[' 2 -ne 2 ']' 00:05:14.232 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:14.232 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:14.232 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:14.232 +++ basename /dev/fd/62 00:05:14.232 ++ mktemp /tmp/62.XXX 00:05:14.232 + tmp_file_1=/tmp/62.axq 00:05:14.232 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.232 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.232 + tmp_file_2=/tmp/spdk_tgt_config.json.D6G 00:05:14.233 + ret=0 00:05:14.233 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.799 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.799 + diff -u /tmp/62.axq /tmp/spdk_tgt_config.json.D6G 00:05:14.799 INFO: JSON config files are the same 00:05:14.799 + echo 'INFO: JSON config files are the same' 00:05:14.799 + rm /tmp/62.axq /tmp/spdk_tgt_config.json.D6G 00:05:14.799 + exit 0 00:05:14.799 12:29:47 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:14.799 INFO: changing configuration and checking if this can be detected... 00:05:14.800 12:29:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:14.800 12:29:47 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.800 12:29:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.058 12:29:47 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:15.058 12:29:47 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:15.058 12:29:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.058 + '[' 2 -ne 2 ']' 00:05:15.058 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:15.058 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:15.058 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:15.058 +++ basename /dev/fd/62 00:05:15.058 ++ mktemp /tmp/62.XXX 00:05:15.058 + tmp_file_1=/tmp/62.yVA 00:05:15.058 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:15.058 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.058 + tmp_file_2=/tmp/spdk_tgt_config.json.WIK 00:05:15.058 + ret=0 00:05:15.058 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:15.319 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:15.576 + diff -u /tmp/62.yVA /tmp/spdk_tgt_config.json.WIK 00:05:15.576 + ret=1 00:05:15.576 + echo '=== Start of file: /tmp/62.yVA ===' 00:05:15.576 + cat /tmp/62.yVA 00:05:15.576 + echo '=== End of file: /tmp/62.yVA ===' 00:05:15.576 + echo '' 00:05:15.576 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WIK ===' 00:05:15.576 + cat /tmp/spdk_tgt_config.json.WIK 00:05:15.576 + echo '=== End of file: /tmp/spdk_tgt_config.json.WIK ===' 00:05:15.576 + echo '' 00:05:15.576 + rm /tmp/62.yVA /tmp/spdk_tgt_config.json.WIK 00:05:15.576 + exit 1 00:05:15.576 INFO: configuration change detected. 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@317 -- # [[ -n 59498 ]] 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.576 12:29:48 json_config -- json_config/json_config.sh@323 -- # killprocess 59498 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@948 -- # '[' -z 59498 ']' 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@952 -- # kill -0 59498 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@953 -- # uname 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59498 00:05:15.576 
killing process with pid 59498 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59498' 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@967 -- # kill 59498 00:05:15.576 12:29:48 json_config -- common/autotest_common.sh@972 -- # wait 59498 00:05:15.833 12:29:48 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:15.833 12:29:48 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:15.833 12:29:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.833 12:29:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.833 12:29:48 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:15.833 12:29:48 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:15.833 INFO: Success 00:05:15.833 ************************************ 00:05:15.833 END TEST json_config 00:05:15.833 ************************************ 00:05:15.833 00:05:15.833 real 0m8.777s 00:05:15.833 user 0m12.662s 00:05:15.833 sys 0m1.894s 00:05:15.833 12:29:48 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.833 12:29:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.833 12:29:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.833 12:29:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:15.833 12:29:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.833 12:29:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.833 12:29:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.833 ************************************ 00:05:15.833 START TEST json_config_extra_key 00:05:15.833 ************************************ 00:05:15.833 12:29:48 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:16.091 12:29:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.091 12:29:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.091 12:29:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.091 12:29:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.091 12:29:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.091 12:29:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.091 12:29:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:16.091 12:29:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.091 12:29:48 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:16.091 12:29:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.091 INFO: launching applications... 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:16.091 12:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59643 00:05:16.091 12:29:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:16.092 Waiting for target to run... 00:05:16.092 12:29:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:16.092 12:29:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59643 /var/tmp/spdk_tgt.sock 00:05:16.092 12:29:48 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59643 ']' 00:05:16.092 12:29:48 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.092 12:29:48 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.092 12:29:48 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.092 12:29:48 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.092 12:29:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.092 [2024-07-15 12:29:48.621724] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:16.092 [2024-07-15 12:29:48.621888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:05:16.657 [2024-07-15 12:29:49.051175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.657 [2024-07-15 12:29:49.153156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.657 [2024-07-15 12:29:49.175300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.280 12:29:49 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.280 12:29:49 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:17.280 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:17.280 INFO: shutting down applications... 00:05:17.280 12:29:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
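The shutdown sequence that follows uses the usual SIGINT-then-poll idiom: send SIGINT to the target, then loop on kill -0 (which delivers no signal and only reports whether the PID still exists) until the process has exited or the retry budget runs out. A rough standalone equivalent, assuming the $tgt_pid captured at launch above:

    kill -SIGINT "$tgt_pid"
    # kill -0 just checks that the process is still alive; break as soon as it is gone.
    for ((i = 0; i < 30; i++)); do
        kill -0 "$tgt_pid" 2> /dev/null || break
        sleep 0.5
    done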
00:05:17.280 12:29:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59643 ]] 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59643 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59643 00:05:17.280 12:29:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.537 12:29:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.537 12:29:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.537 12:29:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59643 00:05:17.537 12:29:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.537 12:29:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:17.537 SPDK target shutdown done 00:05:17.537 12:29:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.538 12:29:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.538 Success 00:05:17.538 12:29:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:17.538 00:05:17.538 real 0m1.673s 00:05:17.538 user 0m1.590s 00:05:17.538 sys 0m0.442s 00:05:17.538 12:29:50 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.538 ************************************ 00:05:17.538 END TEST json_config_extra_key 00:05:17.538 ************************************ 00:05:17.538 12:29:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.538 12:29:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.538 12:29:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.538 12:29:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.538 12:29:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.538 12:29:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.538 ************************************ 00:05:17.538 START TEST alias_rpc 00:05:17.538 ************************************ 00:05:17.538 12:29:50 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.795 * Looking for test storage... 
00:05:17.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:17.795 12:29:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.795 12:29:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59703 00:05:17.795 12:29:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59703 00:05:17.795 12:29:50 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59703 ']' 00:05:17.795 12:29:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.796 12:29:50 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.796 12:29:50 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.796 12:29:50 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.796 12:29:50 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.796 12:29:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.796 [2024-07-15 12:29:50.325947] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:17.796 [2024-07-15 12:29:50.326068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59703 ] 00:05:17.796 [2024-07-15 12:29:50.464694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.054 [2024-07-15 12:29:50.585361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.054 [2024-07-15 12:29:50.638927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:18.618 12:29:51 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.618 12:29:51 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:18.618 12:29:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:18.876 12:29:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59703 00:05:18.876 12:29:51 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59703 ']' 00:05:18.876 12:29:51 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59703 00:05:18.876 12:29:51 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:18.876 12:29:51 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.876 12:29:51 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59703 00:05:19.135 12:29:51 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.135 killing process with pid 59703 00:05:19.135 12:29:51 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.135 12:29:51 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59703' 00:05:19.135 12:29:51 alias_rpc -- common/autotest_common.sh@967 -- # kill 59703 00:05:19.135 12:29:51 alias_rpc -- common/autotest_common.sh@972 -- # wait 59703 00:05:19.393 00:05:19.393 real 0m1.782s 00:05:19.393 user 0m1.975s 00:05:19.393 sys 0m0.453s 00:05:19.393 12:29:51 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.393 12:29:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.393 
************************************ 00:05:19.393 END TEST alias_rpc 00:05:19.393 ************************************ 00:05:19.393 12:29:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:19.393 12:29:52 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:19.393 12:29:52 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:19.393 12:29:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.393 12:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.393 12:29:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.393 ************************************ 00:05:19.393 START TEST spdkcli_tcp 00:05:19.393 ************************************ 00:05:19.393 12:29:52 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:19.651 * Looking for test storage... 00:05:19.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59779 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59779 00:05:19.651 12:29:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59779 ']' 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.651 12:29:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.651 [2024-07-15 12:29:52.153662] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
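The spdkcli_tcp run being brought up here exercises the target's JSON-RPC interface over TCP rather than the default UNIX socket: as the trace a few lines below shows, socat forwards TCP port 9998 to /var/tmp/spdk.sock, and rpc.py then talks to 127.0.0.1:9998 with a connection retry count (-r) and a per-call timeout (-t). A hedged standalone sketch of that bridge (port and socket path taken from the trace; error handling omitted):

    # Forward TCP 9998 to the target's default UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Query the method list over TCP; -r retries the connection, -t bounds each call.
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    # socat may already have exited once the single connection closed.
    kill "$socat_pid" 2> /dev/null || true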
00:05:19.651 [2024-07-15 12:29:52.153785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:05:19.651 [2024-07-15 12:29:52.288493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.909 [2024-07-15 12:29:52.407851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.909 [2024-07-15 12:29:52.407864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.909 [2024-07-15 12:29:52.461562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.844 12:29:53 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.844 12:29:53 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:20.844 12:29:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59796 00:05:20.844 12:29:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:20.844 12:29:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:20.844 [ 00:05:20.844 "bdev_malloc_delete", 00:05:20.844 "bdev_malloc_create", 00:05:20.844 "bdev_null_resize", 00:05:20.844 "bdev_null_delete", 00:05:20.844 "bdev_null_create", 00:05:20.844 "bdev_nvme_cuse_unregister", 00:05:20.844 "bdev_nvme_cuse_register", 00:05:20.844 "bdev_opal_new_user", 00:05:20.844 "bdev_opal_set_lock_state", 00:05:20.844 "bdev_opal_delete", 00:05:20.844 "bdev_opal_get_info", 00:05:20.844 "bdev_opal_create", 00:05:20.844 "bdev_nvme_opal_revert", 00:05:20.844 "bdev_nvme_opal_init", 00:05:20.844 "bdev_nvme_send_cmd", 00:05:20.844 "bdev_nvme_get_path_iostat", 00:05:20.844 "bdev_nvme_get_mdns_discovery_info", 00:05:20.844 "bdev_nvme_stop_mdns_discovery", 00:05:20.844 "bdev_nvme_start_mdns_discovery", 00:05:20.844 "bdev_nvme_set_multipath_policy", 00:05:20.844 "bdev_nvme_set_preferred_path", 00:05:20.844 "bdev_nvme_get_io_paths", 00:05:20.844 "bdev_nvme_remove_error_injection", 00:05:20.844 "bdev_nvme_add_error_injection", 00:05:20.844 "bdev_nvme_get_discovery_info", 00:05:20.844 "bdev_nvme_stop_discovery", 00:05:20.844 "bdev_nvme_start_discovery", 00:05:20.844 "bdev_nvme_get_controller_health_info", 00:05:20.844 "bdev_nvme_disable_controller", 00:05:20.844 "bdev_nvme_enable_controller", 00:05:20.844 "bdev_nvme_reset_controller", 00:05:20.844 "bdev_nvme_get_transport_statistics", 00:05:20.844 "bdev_nvme_apply_firmware", 00:05:20.844 "bdev_nvme_detach_controller", 00:05:20.844 "bdev_nvme_get_controllers", 00:05:20.844 "bdev_nvme_attach_controller", 00:05:20.844 "bdev_nvme_set_hotplug", 00:05:20.844 "bdev_nvme_set_options", 00:05:20.844 "bdev_passthru_delete", 00:05:20.844 "bdev_passthru_create", 00:05:20.844 "bdev_lvol_set_parent_bdev", 00:05:20.844 "bdev_lvol_set_parent", 00:05:20.844 "bdev_lvol_check_shallow_copy", 00:05:20.844 "bdev_lvol_start_shallow_copy", 00:05:20.844 "bdev_lvol_grow_lvstore", 00:05:20.844 "bdev_lvol_get_lvols", 00:05:20.844 "bdev_lvol_get_lvstores", 00:05:20.844 "bdev_lvol_delete", 00:05:20.844 "bdev_lvol_set_read_only", 00:05:20.844 "bdev_lvol_resize", 00:05:20.844 "bdev_lvol_decouple_parent", 00:05:20.844 "bdev_lvol_inflate", 00:05:20.844 "bdev_lvol_rename", 00:05:20.844 "bdev_lvol_clone_bdev", 00:05:20.844 "bdev_lvol_clone", 00:05:20.844 "bdev_lvol_snapshot", 00:05:20.844 "bdev_lvol_create", 
00:05:20.844 "bdev_lvol_delete_lvstore", 00:05:20.844 "bdev_lvol_rename_lvstore", 00:05:20.844 "bdev_lvol_create_lvstore", 00:05:20.844 "bdev_raid_set_options", 00:05:20.844 "bdev_raid_remove_base_bdev", 00:05:20.844 "bdev_raid_add_base_bdev", 00:05:20.844 "bdev_raid_delete", 00:05:20.844 "bdev_raid_create", 00:05:20.844 "bdev_raid_get_bdevs", 00:05:20.844 "bdev_error_inject_error", 00:05:20.844 "bdev_error_delete", 00:05:20.844 "bdev_error_create", 00:05:20.844 "bdev_split_delete", 00:05:20.844 "bdev_split_create", 00:05:20.844 "bdev_delay_delete", 00:05:20.844 "bdev_delay_create", 00:05:20.844 "bdev_delay_update_latency", 00:05:20.844 "bdev_zone_block_delete", 00:05:20.844 "bdev_zone_block_create", 00:05:20.844 "blobfs_create", 00:05:20.844 "blobfs_detect", 00:05:20.844 "blobfs_set_cache_size", 00:05:20.844 "bdev_aio_delete", 00:05:20.844 "bdev_aio_rescan", 00:05:20.845 "bdev_aio_create", 00:05:20.845 "bdev_ftl_set_property", 00:05:20.845 "bdev_ftl_get_properties", 00:05:20.845 "bdev_ftl_get_stats", 00:05:20.845 "bdev_ftl_unmap", 00:05:20.845 "bdev_ftl_unload", 00:05:20.845 "bdev_ftl_delete", 00:05:20.845 "bdev_ftl_load", 00:05:20.845 "bdev_ftl_create", 00:05:20.845 "bdev_virtio_attach_controller", 00:05:20.845 "bdev_virtio_scsi_get_devices", 00:05:20.845 "bdev_virtio_detach_controller", 00:05:20.845 "bdev_virtio_blk_set_hotplug", 00:05:20.845 "bdev_iscsi_delete", 00:05:20.845 "bdev_iscsi_create", 00:05:20.845 "bdev_iscsi_set_options", 00:05:20.845 "bdev_uring_delete", 00:05:20.845 "bdev_uring_rescan", 00:05:20.845 "bdev_uring_create", 00:05:20.845 "accel_error_inject_error", 00:05:20.845 "ioat_scan_accel_module", 00:05:20.845 "dsa_scan_accel_module", 00:05:20.845 "iaa_scan_accel_module", 00:05:20.845 "keyring_file_remove_key", 00:05:20.845 "keyring_file_add_key", 00:05:20.845 "keyring_linux_set_options", 00:05:20.845 "iscsi_get_histogram", 00:05:20.845 "iscsi_enable_histogram", 00:05:20.845 "iscsi_set_options", 00:05:20.845 "iscsi_get_auth_groups", 00:05:20.845 "iscsi_auth_group_remove_secret", 00:05:20.845 "iscsi_auth_group_add_secret", 00:05:20.845 "iscsi_delete_auth_group", 00:05:20.845 "iscsi_create_auth_group", 00:05:20.845 "iscsi_set_discovery_auth", 00:05:20.845 "iscsi_get_options", 00:05:20.845 "iscsi_target_node_request_logout", 00:05:20.845 "iscsi_target_node_set_redirect", 00:05:20.845 "iscsi_target_node_set_auth", 00:05:20.845 "iscsi_target_node_add_lun", 00:05:20.845 "iscsi_get_stats", 00:05:20.845 "iscsi_get_connections", 00:05:20.845 "iscsi_portal_group_set_auth", 00:05:20.845 "iscsi_start_portal_group", 00:05:20.845 "iscsi_delete_portal_group", 00:05:20.845 "iscsi_create_portal_group", 00:05:20.845 "iscsi_get_portal_groups", 00:05:20.845 "iscsi_delete_target_node", 00:05:20.845 "iscsi_target_node_remove_pg_ig_maps", 00:05:20.845 "iscsi_target_node_add_pg_ig_maps", 00:05:20.845 "iscsi_create_target_node", 00:05:20.845 "iscsi_get_target_nodes", 00:05:20.845 "iscsi_delete_initiator_group", 00:05:20.845 "iscsi_initiator_group_remove_initiators", 00:05:20.845 "iscsi_initiator_group_add_initiators", 00:05:20.845 "iscsi_create_initiator_group", 00:05:20.845 "iscsi_get_initiator_groups", 00:05:20.845 "nvmf_set_crdt", 00:05:20.845 "nvmf_set_config", 00:05:20.845 "nvmf_set_max_subsystems", 00:05:20.845 "nvmf_stop_mdns_prr", 00:05:20.845 "nvmf_publish_mdns_prr", 00:05:20.845 "nvmf_subsystem_get_listeners", 00:05:20.845 "nvmf_subsystem_get_qpairs", 00:05:20.845 "nvmf_subsystem_get_controllers", 00:05:20.845 "nvmf_get_stats", 00:05:20.845 "nvmf_get_transports", 00:05:20.845 
"nvmf_create_transport", 00:05:20.845 "nvmf_get_targets", 00:05:20.845 "nvmf_delete_target", 00:05:20.845 "nvmf_create_target", 00:05:20.845 "nvmf_subsystem_allow_any_host", 00:05:20.845 "nvmf_subsystem_remove_host", 00:05:20.845 "nvmf_subsystem_add_host", 00:05:20.845 "nvmf_ns_remove_host", 00:05:20.845 "nvmf_ns_add_host", 00:05:20.845 "nvmf_subsystem_remove_ns", 00:05:20.845 "nvmf_subsystem_add_ns", 00:05:20.845 "nvmf_subsystem_listener_set_ana_state", 00:05:20.845 "nvmf_discovery_get_referrals", 00:05:20.845 "nvmf_discovery_remove_referral", 00:05:20.845 "nvmf_discovery_add_referral", 00:05:20.845 "nvmf_subsystem_remove_listener", 00:05:20.845 "nvmf_subsystem_add_listener", 00:05:20.845 "nvmf_delete_subsystem", 00:05:20.845 "nvmf_create_subsystem", 00:05:20.845 "nvmf_get_subsystems", 00:05:20.845 "env_dpdk_get_mem_stats", 00:05:20.845 "nbd_get_disks", 00:05:20.845 "nbd_stop_disk", 00:05:20.845 "nbd_start_disk", 00:05:20.845 "ublk_recover_disk", 00:05:20.845 "ublk_get_disks", 00:05:20.845 "ublk_stop_disk", 00:05:20.845 "ublk_start_disk", 00:05:20.845 "ublk_destroy_target", 00:05:20.845 "ublk_create_target", 00:05:20.845 "virtio_blk_create_transport", 00:05:20.845 "virtio_blk_get_transports", 00:05:20.845 "vhost_controller_set_coalescing", 00:05:20.845 "vhost_get_controllers", 00:05:20.845 "vhost_delete_controller", 00:05:20.845 "vhost_create_blk_controller", 00:05:20.845 "vhost_scsi_controller_remove_target", 00:05:20.845 "vhost_scsi_controller_add_target", 00:05:20.845 "vhost_start_scsi_controller", 00:05:20.845 "vhost_create_scsi_controller", 00:05:20.845 "thread_set_cpumask", 00:05:20.845 "framework_get_governor", 00:05:20.845 "framework_get_scheduler", 00:05:20.845 "framework_set_scheduler", 00:05:20.845 "framework_get_reactors", 00:05:20.845 "thread_get_io_channels", 00:05:20.845 "thread_get_pollers", 00:05:20.845 "thread_get_stats", 00:05:20.845 "framework_monitor_context_switch", 00:05:20.845 "spdk_kill_instance", 00:05:20.845 "log_enable_timestamps", 00:05:20.845 "log_get_flags", 00:05:20.845 "log_clear_flag", 00:05:20.845 "log_set_flag", 00:05:20.845 "log_get_level", 00:05:20.845 "log_set_level", 00:05:20.845 "log_get_print_level", 00:05:20.845 "log_set_print_level", 00:05:20.845 "framework_enable_cpumask_locks", 00:05:20.845 "framework_disable_cpumask_locks", 00:05:20.845 "framework_wait_init", 00:05:20.845 "framework_start_init", 00:05:20.845 "scsi_get_devices", 00:05:20.845 "bdev_get_histogram", 00:05:20.845 "bdev_enable_histogram", 00:05:20.845 "bdev_set_qos_limit", 00:05:20.845 "bdev_set_qd_sampling_period", 00:05:20.845 "bdev_get_bdevs", 00:05:20.845 "bdev_reset_iostat", 00:05:20.845 "bdev_get_iostat", 00:05:20.845 "bdev_examine", 00:05:20.845 "bdev_wait_for_examine", 00:05:20.845 "bdev_set_options", 00:05:20.845 "notify_get_notifications", 00:05:20.845 "notify_get_types", 00:05:20.845 "accel_get_stats", 00:05:20.845 "accel_set_options", 00:05:20.845 "accel_set_driver", 00:05:20.845 "accel_crypto_key_destroy", 00:05:20.845 "accel_crypto_keys_get", 00:05:20.845 "accel_crypto_key_create", 00:05:20.845 "accel_assign_opc", 00:05:20.845 "accel_get_module_info", 00:05:20.845 "accel_get_opc_assignments", 00:05:20.845 "vmd_rescan", 00:05:20.845 "vmd_remove_device", 00:05:20.845 "vmd_enable", 00:05:20.845 "sock_get_default_impl", 00:05:20.845 "sock_set_default_impl", 00:05:20.845 "sock_impl_set_options", 00:05:20.845 "sock_impl_get_options", 00:05:20.845 "iobuf_get_stats", 00:05:20.845 "iobuf_set_options", 00:05:20.845 "framework_get_pci_devices", 00:05:20.845 
"framework_get_config", 00:05:20.845 "framework_get_subsystems", 00:05:20.845 "trace_get_info", 00:05:20.845 "trace_get_tpoint_group_mask", 00:05:20.845 "trace_disable_tpoint_group", 00:05:20.845 "trace_enable_tpoint_group", 00:05:20.845 "trace_clear_tpoint_mask", 00:05:20.845 "trace_set_tpoint_mask", 00:05:20.845 "keyring_get_keys", 00:05:20.845 "spdk_get_version", 00:05:20.845 "rpc_get_methods" 00:05:20.845 ] 00:05:20.845 12:29:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.845 12:29:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:20.845 12:29:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59779 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59779 ']' 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59779 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59779 00:05:20.845 killing process with pid 59779 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59779' 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59779 00:05:20.845 12:29:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59779 00:05:21.410 ************************************ 00:05:21.410 END TEST spdkcli_tcp 00:05:21.410 ************************************ 00:05:21.410 00:05:21.410 real 0m1.860s 00:05:21.410 user 0m3.502s 00:05:21.410 sys 0m0.458s 00:05:21.410 12:29:53 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.410 12:29:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.410 12:29:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.410 12:29:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.410 12:29:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.410 12:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.410 12:29:53 -- common/autotest_common.sh@10 -- # set +x 00:05:21.410 ************************************ 00:05:21.410 START TEST dpdk_mem_utility 00:05:21.410 ************************************ 00:05:21.410 12:29:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.410 * Looking for test storage... 
00:05:21.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:21.410 12:29:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:21.410 12:29:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59870 00:05:21.410 12:29:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.410 12:29:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59870 00:05:21.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.410 12:29:54 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59870 ']' 00:05:21.410 12:29:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.410 12:29:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.410 12:29:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.410 12:29:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.410 12:29:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.410 [2024-07-15 12:29:54.076534] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:21.410 [2024-07-15 12:29:54.077022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59870 ] 00:05:21.668 [2024-07-15 12:29:54.217986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.668 [2024-07-15 12:29:54.336508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.925 [2024-07-15 12:29:54.389843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.517 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.517 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:22.517 12:29:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.517 12:29:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.517 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.517 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.517 { 00:05:22.517 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.517 } 00:05:22.517 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.517 12:29:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:22.517 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:22.517 1 heaps totaling size 814.000000 MiB 00:05:22.517 size: 814.000000 MiB heap id: 0 00:05:22.517 end heaps---------- 00:05:22.517 8 mempools totaling size 598.116089 MiB 00:05:22.517 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:22.517 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:22.517 size: 84.521057 MiB name: bdev_io_59870 00:05:22.517 size: 51.011292 MiB name: evtpool_59870 00:05:22.517 size: 50.003479 
MiB name: msgpool_59870 00:05:22.517 size: 21.763794 MiB name: PDU_Pool 00:05:22.517 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:22.517 size: 0.026123 MiB name: Session_Pool 00:05:22.517 end mempools------- 00:05:22.517 6 memzones totaling size 4.142822 MiB 00:05:22.517 size: 1.000366 MiB name: RG_ring_0_59870 00:05:22.517 size: 1.000366 MiB name: RG_ring_1_59870 00:05:22.517 size: 1.000366 MiB name: RG_ring_4_59870 00:05:22.517 size: 1.000366 MiB name: RG_ring_5_59870 00:05:22.517 size: 0.125366 MiB name: RG_ring_2_59870 00:05:22.517 size: 0.015991 MiB name: RG_ring_3_59870 00:05:22.517 end memzones------- 00:05:22.517 12:29:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:22.517 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:05:22.517 list of free elements. size: 12.472107 MiB 00:05:22.517 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:22.517 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:22.517 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:22.517 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:22.517 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:22.517 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:22.517 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:22.517 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:22.517 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:22.517 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:05:22.517 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:22.517 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:22.517 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:22.517 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:22.517 element at address: 0x200003a00000 with size: 0.348572 MiB 00:05:22.517 list of standard malloc elements. 
size: 199.265320 MiB 00:05:22.517 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:22.517 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:22.517 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:22.517 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:22.517 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:22.517 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:22.517 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:22.517 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:22.517 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:22.517 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:22.517 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:22.518 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:22.518 element at 
address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92080 
with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94540 with size: 0.000183 MiB 
00:05:22.518 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:22.518 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:22.519 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:22.519 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:22.519 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:22.519 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:22.519 element at 
address: 0x200027e6d800 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6fcc0 
with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:22.519 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:22.519 list of memzone associated elements. size: 602.262573 MiB 00:05:22.519 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:22.519 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:22.519 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:22.519 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:22.519 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:22.519 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59870_0 00:05:22.519 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:22.519 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59870_0 00:05:22.519 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:22.519 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59870_0 00:05:22.519 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:22.519 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:22.519 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:22.519 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:22.519 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:22.519 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59870 00:05:22.519 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:22.519 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59870 00:05:22.519 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:22.519 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59870 00:05:22.519 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:22.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:22.519 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:22.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:22.519 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:22.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:22.519 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:22.519 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:22.519 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:22.519 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59870 00:05:22.519 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:22.519 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59870 00:05:22.519 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:22.519 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59870 00:05:22.519 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:22.519 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59870 00:05:22.519 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:22.519 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59870 00:05:22.519 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:22.519 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:22.519 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:22.519 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:05:22.519 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:22.519 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:22.519 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:22.519 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59870 00:05:22.519 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:22.519 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:22.519 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:22.519 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:22.519 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:22.519 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59870 00:05:22.519 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:22.519 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:22.519 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:22.519 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59870 00:05:22.519 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:22.519 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59870 00:05:22.519 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:22.519 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:22.519 12:29:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:22.519 12:29:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59870 00:05:22.519 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59870 ']' 00:05:22.519 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59870 00:05:22.519 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:22.519 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.519 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59870 00:05:22.778 killing process with pid 59870 00:05:22.778 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.778 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.778 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59870' 00:05:22.778 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59870 00:05:22.778 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59870 00:05:23.037 00:05:23.037 real 0m1.680s 00:05:23.037 user 0m1.834s 00:05:23.037 sys 0m0.413s 00:05:23.037 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.037 ************************************ 00:05:23.037 END TEST dpdk_mem_utility 00:05:23.037 ************************************ 00:05:23.037 12:29:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.037 12:29:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.037 12:29:55 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:23.037 12:29:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.037 12:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.037 12:29:55 -- common/autotest_common.sh@10 -- # set +x 00:05:23.037 ************************************ 00:05:23.037 START TEST event 00:05:23.037 
************************************ 00:05:23.037 12:29:55 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:23.037 * Looking for test storage... 00:05:23.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:23.037 12:29:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:23.037 12:29:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:23.037 12:29:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.037 12:29:55 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:23.037 12:29:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.037 12:29:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.296 ************************************ 00:05:23.296 START TEST event_perf 00:05:23.296 ************************************ 00:05:23.296 12:29:55 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.296 Running I/O for 1 seconds...[2024-07-15 12:29:55.739292] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:23.296 [2024-07-15 12:29:55.739588] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59942 ] 00:05:23.296 [2024-07-15 12:29:55.881957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.553 [2024-07-15 12:29:56.004651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.554 [2024-07-15 12:29:56.004777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.554 [2024-07-15 12:29:56.004885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.554 Running I/O for 1 seconds...[2024-07-15 12:29:56.005137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.489 00:05:24.489 lcore 0: 197304 00:05:24.489 lcore 1: 197305 00:05:24.489 lcore 2: 197306 00:05:24.489 lcore 3: 197303 00:05:24.489 done. 00:05:24.489 00:05:24.489 real 0m1.375s 00:05:24.489 user 0m4.177s 00:05:24.489 sys 0m0.068s 00:05:24.489 12:29:57 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.489 12:29:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.489 ************************************ 00:05:24.489 END TEST event_perf 00:05:24.489 ************************************ 00:05:24.489 12:29:57 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.489 12:29:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:24.489 12:29:57 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:24.489 12:29:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.489 12:29:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.489 ************************************ 00:05:24.489 START TEST event_reactor 00:05:24.489 ************************************ 00:05:24.489 12:29:57 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:24.490 [2024-07-15 12:29:57.158450] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:24.490 [2024-07-15 12:29:57.158550] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59980 ] 00:05:24.748 [2024-07-15 12:29:57.290296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.006 [2024-07-15 12:29:57.439494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.942 test_start 00:05:25.942 oneshot 00:05:25.942 tick 100 00:05:25.942 tick 100 00:05:25.942 tick 250 00:05:25.942 tick 100 00:05:25.942 tick 100 00:05:25.942 tick 100 00:05:25.942 tick 250 00:05:25.942 tick 500 00:05:25.942 tick 100 00:05:25.942 tick 100 00:05:25.942 tick 250 00:05:25.942 tick 100 00:05:25.942 tick 100 00:05:25.942 test_end 00:05:25.942 ************************************ 00:05:25.942 END TEST event_reactor 00:05:25.942 ************************************ 00:05:25.942 00:05:25.942 real 0m1.389s 00:05:25.942 user 0m1.219s 00:05:25.942 sys 0m0.061s 00:05:25.942 12:29:58 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.942 12:29:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:25.942 12:29:58 event -- common/autotest_common.sh@1142 -- # return 0 00:05:25.942 12:29:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.942 12:29:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:25.942 12:29:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.942 12:29:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.942 ************************************ 00:05:25.942 START TEST event_reactor_perf 00:05:25.942 ************************************ 00:05:25.942 12:29:58 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.942 [2024-07-15 12:29:58.590987] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:25.943 [2024-07-15 12:29:58.591150] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60016 ] 00:05:26.201 [2024-07-15 12:29:58.737460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.201 [2024-07-15 12:29:58.867905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.634 test_start 00:05:27.634 test_end 00:05:27.634 Performance: 366084 events per second 00:05:27.634 ************************************ 00:05:27.634 END TEST event_reactor_perf 00:05:27.634 ************************************ 00:05:27.634 00:05:27.634 real 0m1.384s 00:05:27.634 user 0m1.218s 00:05:27.634 sys 0m0.057s 00:05:27.634 12:29:59 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.634 12:29:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.634 12:29:59 event -- common/autotest_common.sh@1142 -- # return 0 00:05:27.634 12:29:59 event -- event/event.sh@49 -- # uname -s 00:05:27.634 12:29:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.634 12:29:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:27.634 12:29:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.634 12:29:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.634 12:29:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.634 ************************************ 00:05:27.634 START TEST event_scheduler 00:05:27.634 ************************************ 00:05:27.634 12:30:00 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:27.634 * Looking for test storage... 00:05:27.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:27.634 12:30:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.634 12:30:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60077 00:05:27.634 12:30:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:27.634 12:30:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.634 12:30:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60077 00:05:27.634 12:30:00 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60077 ']' 00:05:27.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.634 12:30:00 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.634 12:30:00 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.634 12:30:00 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.634 12:30:00 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.634 12:30:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.634 [2024-07-15 12:30:00.140354] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:27.634 [2024-07-15 12:30:00.140819] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60077 ] 00:05:27.634 [2024-07-15 12:30:00.282035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.893 [2024-07-15 12:30:00.416905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.893 [2024-07-15 12:30:00.416996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.893 [2024-07-15 12:30:00.417122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.893 [2024-07-15 12:30:00.417142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.462 12:30:01 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.462 12:30:01 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:28.462 12:30:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.462 12:30:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.462 12:30:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.462 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.462 POWER: Cannot set governor of lcore 0 to performance 00:05:28.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.462 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.462 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.462 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:28.462 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:28.462 POWER: Unable to set Power Management Environment for lcore 0 00:05:28.462 [2024-07-15 12:30:01.139857] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:28.462 [2024-07-15 12:30:01.139958] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:28.462 [2024-07-15 12:30:01.140040] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:28.462 [2024-07-15 12:30:01.140129] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:28.462 [2024-07-15 12:30:01.140577] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:28.462 [2024-07-15 12:30:01.140684] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:28.462 12:30:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.462 12:30:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.462 12:30:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.462 12:30:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.720 [2024-07-15 12:30:01.221401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:28.720 [2024-07-15 12:30:01.269604] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:28.720 12:30:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.720 12:30:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:28.720 12:30:01 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.720 12:30:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.720 12:30:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.720 ************************************ 00:05:28.720 START TEST scheduler_create_thread 00:05:28.720 ************************************ 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.720 2 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.720 3 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.720 4 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.720 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.720 5 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.721 6 00:05:28.721 
12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.721 7 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.721 8 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.721 9 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.721 10 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.721 12:30:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.721 12:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.623 12:30:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.623 12:30:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.623 12:30:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.623 12:30:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.623 12:30:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.558 ************************************ 00:05:31.558 END TEST scheduler_create_thread 00:05:31.558 ************************************ 00:05:31.558 12:30:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.558 00:05:31.558 real 0m2.614s 00:05:31.558 user 0m0.014s 00:05:31.558 sys 0m0.007s 00:05:31.558 12:30:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.558 12:30:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.558 12:30:03 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:31.558 12:30:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.558 12:30:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60077 00:05:31.558 12:30:03 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60077 ']' 00:05:31.558 12:30:03 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60077 00:05:31.558 12:30:03 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:31.558 12:30:03 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.558 12:30:03 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60077 00:05:31.558 killing process with pid 60077 00:05:31.558 12:30:03 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:31.559 12:30:03 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:31.559 12:30:03 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60077' 00:05:31.559 12:30:03 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60077 00:05:31.559 12:30:03 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60077 00:05:31.817 [2024-07-15 12:30:04.375781] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
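The scheduler trace above reduces to a small RPC sequence: start the scheduler test app paused with --wait-for-rpc, select the dynamic scheduler, then drive scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete through rpc.py with the scheduler plugin. A minimal standalone sketch of that flow is shown below; it assumes the repo layout seen in the log, assumes scheduler_plugin.py is importable from the scheduler test directory, and uses a plain sleep where the harness uses waitforlisten. It is an illustration of the RPC calls visible in the trace, not the test script itself.

#!/usr/bin/env bash
# Sketch of the RPC sequence exercised by scheduler.sh above; not the real test.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
# Assumption: the scheduler_plugin module lives in the scheduler test directory.
export PYTHONPATH=$SPDK/test/event/scheduler:${PYTHONPATH:-}
RPC="$SPDK/scripts/rpc.py"

# Start the test app paused (core mask and options copied from the log).
"$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
pid=$!
trap 'kill $pid' EXIT
sleep 1   # the harness uses waitforlisten instead of a fixed sleep

# Pick the dynamic scheduler, then let initialization finish.
"$RPC" framework_set_scheduler dynamic
"$RPC" framework_start_init

# Create a thread pinned to core 0, set it 50% active, then delete it --
# the same create / set-active / delete RPCs the trace runs against ids 11 and 12.
tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
"$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
"$RPC" --plugin scheduler_plugin scheduler_thread_delete "$tid"
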
00:05:32.076 ************************************ 00:05:32.076 END TEST event_scheduler 00:05:32.076 ************************************ 00:05:32.076 00:05:32.076 real 0m4.616s 00:05:32.076 user 0m8.571s 00:05:32.076 sys 0m0.390s 00:05:32.076 12:30:04 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.076 12:30:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.076 12:30:04 event -- common/autotest_common.sh@1142 -- # return 0 00:05:32.076 12:30:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:32.076 12:30:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:32.076 12:30:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.076 12:30:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.076 12:30:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.076 ************************************ 00:05:32.076 START TEST app_repeat 00:05:32.076 ************************************ 00:05:32.076 12:30:04 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:32.076 Process app_repeat pid: 60177 00:05:32.076 spdk_app_start Round 0 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60177 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60177' 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:32.076 12:30:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60177 /var/tmp/spdk-nbd.sock 00:05:32.076 12:30:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60177 ']' 00:05:32.076 12:30:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.076 12:30:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.076 12:30:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.076 12:30:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.076 12:30:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.076 [2024-07-15 12:30:04.708348] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:32.076 [2024-07-15 12:30:04.708449] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60177 ] 00:05:32.335 [2024-07-15 12:30:04.847244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.335 [2024-07-15 12:30:04.978876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.335 [2024-07-15 12:30:04.978889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.594 [2024-07-15 12:30:05.036798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:33.173 12:30:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.173 12:30:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:33.173 12:30:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.432 Malloc0 00:05:33.432 12:30:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.692 Malloc1 00:05:33.692 12:30:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.692 12:30:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.951 /dev/nbd0 00:05:33.951 12:30:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.951 12:30:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:33.951 12:30:06 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:33.951 12:30:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.951 1+0 records in 00:05:33.951 1+0 records out 00:05:33.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688189 s, 6.0 MB/s 00:05:33.952 12:30:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.952 12:30:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:33.952 12:30:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.952 12:30:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:33.952 12:30:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:33.952 12:30:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.952 12:30:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.952 12:30:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.211 /dev/nbd1 00:05:34.211 12:30:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.211 12:30:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.211 1+0 records in 00:05:34.211 1+0 records out 00:05:34.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378443 s, 10.8 MB/s 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:34.211 12:30:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:34.211 12:30:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.211 12:30:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.211 12:30:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:34.211 12:30:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.211 12:30:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.469 { 00:05:34.469 "nbd_device": "/dev/nbd0", 00:05:34.469 "bdev_name": "Malloc0" 00:05:34.469 }, 00:05:34.469 { 00:05:34.469 "nbd_device": "/dev/nbd1", 00:05:34.469 "bdev_name": "Malloc1" 00:05:34.469 } 00:05:34.469 ]' 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.469 { 00:05:34.469 "nbd_device": "/dev/nbd0", 00:05:34.469 "bdev_name": "Malloc0" 00:05:34.469 }, 00:05:34.469 { 00:05:34.469 "nbd_device": "/dev/nbd1", 00:05:34.469 "bdev_name": "Malloc1" 00:05:34.469 } 00:05:34.469 ]' 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.469 /dev/nbd1' 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.469 /dev/nbd1' 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.469 12:30:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.728 256+0 records in 00:05:34.728 256+0 records out 00:05:34.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105962 s, 99.0 MB/s 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.728 256+0 records in 00:05:34.728 256+0 records out 00:05:34.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283175 s, 37.0 MB/s 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.728 256+0 records in 00:05:34.728 256+0 records out 00:05:34.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280777 s, 37.3 MB/s 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.728 12:30:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.987 12:30:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.246 12:30:07 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.246 12:30:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.505 12:30:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.505 12:30:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.763 12:30:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.030 [2024-07-15 12:30:08.581354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.030 [2024-07-15 12:30:08.702710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.030 [2024-07-15 12:30:08.702720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.337 [2024-07-15 12:30:08.761154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.337 [2024-07-15 12:30:08.761251] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.337 [2024-07-15 12:30:08.761268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.870 spdk_app_start Round 1 00:05:38.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.870 12:30:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.870 12:30:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:38.870 12:30:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60177 /var/tmp/spdk-nbd.sock 00:05:38.870 12:30:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60177 ']' 00:05:38.870 12:30:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.870 12:30:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.870 12:30:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:38.870 12:30:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.870 12:30:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.130 12:30:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.130 12:30:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:39.130 12:30:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.388 Malloc0 00:05:39.388 12:30:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.647 Malloc1 00:05:39.647 12:30:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.647 12:30:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.905 /dev/nbd0 00:05:39.905 12:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.905 12:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.905 1+0 records in 00:05:39.905 1+0 records out 
00:05:39.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403219 s, 10.2 MB/s 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.905 12:30:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:39.905 12:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.905 12:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.905 12:30:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.164 /dev/nbd1 00:05:40.164 12:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.164 12:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.164 1+0 records in 00:05:40.164 1+0 records out 00:05:40.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291708 s, 14.0 MB/s 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.164 12:30:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.164 12:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.164 12:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.164 12:30:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.164 12:30:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.164 12:30:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.731 { 00:05:40.731 "nbd_device": "/dev/nbd0", 00:05:40.731 "bdev_name": "Malloc0" 00:05:40.731 }, 00:05:40.731 { 00:05:40.731 "nbd_device": "/dev/nbd1", 00:05:40.731 "bdev_name": "Malloc1" 00:05:40.731 } 
00:05:40.731 ]' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.731 { 00:05:40.731 "nbd_device": "/dev/nbd0", 00:05:40.731 "bdev_name": "Malloc0" 00:05:40.731 }, 00:05:40.731 { 00:05:40.731 "nbd_device": "/dev/nbd1", 00:05:40.731 "bdev_name": "Malloc1" 00:05:40.731 } 00:05:40.731 ]' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.731 /dev/nbd1' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.731 /dev/nbd1' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.731 256+0 records in 00:05:40.731 256+0 records out 00:05:40.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712498 s, 147 MB/s 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.731 256+0 records in 00:05:40.731 256+0 records out 00:05:40.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233706 s, 44.9 MB/s 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.731 256+0 records in 00:05:40.731 256+0 records out 00:05:40.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267886 s, 39.1 MB/s 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.731 12:30:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.731 12:30:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.989 12:30:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.246 12:30:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.828 12:30:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.828 12:30:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.086 12:30:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.344 [2024-07-15 12:30:14.773605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.344 [2024-07-15 12:30:14.890855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.344 [2024-07-15 12:30:14.890864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.344 [2024-07-15 12:30:14.945402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.344 [2024-07-15 12:30:14.945491] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.344 [2024-07-15 12:30:14.945506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.630 12:30:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.630 spdk_app_start Round 2 00:05:45.630 12:30:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.630 12:30:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60177 /var/tmp/spdk-nbd.sock 00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60177 ']' 00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
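[editor's note] The dd and cmp invocations traced above are the data-verify pass of the nbd helpers. A condensed sketch of that sequence, reconstructed only from the commands visible in this log (the scratch-file path and device names are copied from the trace, not from the helper source):

#!/usr/bin/env bash
# Sketch: fill a scratch file with 1 MiB of random data, copy it onto each
# exported NBD device with O_DIRECT, then compare the first 1 MiB back.
set -euo pipefail

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 256 x 4 KiB = 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write pass onto the NBD device
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # verify pass; non-zero exit on mismatch
done
rm "$tmp_file"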
00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.630 12:30:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:45.630 12:30:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.630 Malloc0 00:05:45.630 12:30:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.889 Malloc1 00:05:45.889 12:30:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.889 12:30:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.148 /dev/nbd0 00:05:46.148 12:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.148 12:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.148 1+0 records in 00:05:46.148 1+0 records out 
00:05:46.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386604 s, 10.6 MB/s 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.148 12:30:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.148 12:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.148 12:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.148 12:30:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.407 /dev/nbd1 00:05:46.407 12:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.407 12:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.407 1+0 records in 00:05:46.407 1+0 records out 00:05:46.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321624 s, 12.7 MB/s 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.407 12:30:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.407 12:30:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.407 12:30:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.407 12:30:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.407 12:30:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.407 12:30:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.407 12:30:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.407 12:30:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.407 12:30:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.666 { 00:05:46.666 "nbd_device": "/dev/nbd0", 00:05:46.666 "bdev_name": "Malloc0" 00:05:46.666 }, 00:05:46.666 { 00:05:46.666 "nbd_device": "/dev/nbd1", 00:05:46.666 "bdev_name": "Malloc1" 00:05:46.666 } 
00:05:46.666 ]' 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.666 { 00:05:46.666 "nbd_device": "/dev/nbd0", 00:05:46.666 "bdev_name": "Malloc0" 00:05:46.666 }, 00:05:46.666 { 00:05:46.666 "nbd_device": "/dev/nbd1", 00:05:46.666 "bdev_name": "Malloc1" 00:05:46.666 } 00:05:46.666 ]' 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.666 /dev/nbd1' 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.666 /dev/nbd1' 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.666 256+0 records in 00:05:46.666 256+0 records out 00:05:46.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682138 s, 154 MB/s 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.666 256+0 records in 00:05:46.666 256+0 records out 00:05:46.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230771 s, 45.4 MB/s 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.666 12:30:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.925 256+0 records in 00:05:46.925 256+0 records out 00:05:46.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270678 s, 38.7 MB/s 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.925 12:30:19 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.925 12:30:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.184 12:30:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.443 12:30:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.765 12:30:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.765 12:30:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.024 12:30:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.282 [2024-07-15 12:30:20.730206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.282 [2024-07-15 12:30:20.845909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.282 [2024-07-15 12:30:20.845919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.282 [2024-07-15 12:30:20.899414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.282 [2024-07-15 12:30:20.899500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.282 [2024-07-15 12:30:20.899516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.571 12:30:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60177 /var/tmp/spdk-nbd.sock 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60177 ']' 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
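[editor's note] Each app_repeat round drives the running app purely over its RPC socket. A sketch of that round trip, condensed from the rpc.py calls in the trace above (rpc.py and the method names are the real SPDK tooling shown in the log; sizes and the socket path mirror the log):

# One app_repeat round, as seen on /var/tmp/spdk-nbd.sock.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create 64 4096                   # 64 MB malloc bdev with 4 KiB blocks; prints the bdev name
$rpc nbd_start_disk Malloc0 /dev/nbd0             # export the bdev as an NBD device
$rpc nbd_get_disks | jq -r '.[] | .nbd_device'    # list exported devices for the count checks
$rpc nbd_stop_disk /dev/nbd0                      # tear the export down again
$rpc spdk_kill_instance SIGTERM                   # ask the app to shut down before the next round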
00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:51.571 12:30:23 event.app_repeat -- event/event.sh@39 -- # killprocess 60177 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60177 ']' 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60177 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60177 00:05:51.571 killing process with pid 60177 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60177' 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60177 00:05:51.571 12:30:23 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60177 00:05:51.571 spdk_app_start is called in Round 0. 00:05:51.571 Shutdown signal received, stop current app iteration 00:05:51.571 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:51.571 spdk_app_start is called in Round 1. 00:05:51.571 Shutdown signal received, stop current app iteration 00:05:51.571 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:51.571 spdk_app_start is called in Round 2. 00:05:51.571 Shutdown signal received, stop current app iteration 00:05:51.571 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:51.571 spdk_app_start is called in Round 3. 
00:05:51.571 Shutdown signal received, stop current app iteration 00:05:51.571 12:30:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.571 12:30:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.571 00:05:51.571 real 0m19.368s 00:05:51.571 user 0m43.334s 00:05:51.571 sys 0m3.017s 00:05:51.571 12:30:24 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.571 ************************************ 00:05:51.571 END TEST app_repeat 00:05:51.571 ************************************ 00:05:51.571 12:30:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.571 12:30:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:51.571 12:30:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.571 12:30:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.571 12:30:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.571 12:30:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.571 12:30:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.571 ************************************ 00:05:51.571 START TEST cpu_locks 00:05:51.571 ************************************ 00:05:51.571 12:30:24 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.571 * Looking for test storage... 00:05:51.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.571 12:30:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.571 12:30:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.571 12:30:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.571 12:30:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.571 12:30:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.571 12:30:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.571 12:30:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.571 ************************************ 00:05:51.571 START TEST default_locks 00:05:51.571 ************************************ 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60615 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60615 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60615 ']' 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
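[editor's note] The killprocess steps traced above follow a fixed pattern. A hedged sketch of that pattern (not the autotest_common.sh helper itself; the pid is whatever the test recorded at startup):

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid"                               # fails if the process is already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")      # e.g. "reactor_0" for an SPDK app
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap it; ignore status if it was not our child
}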
00:05:51.571 12:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.571 12:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.571 [2024-07-15 12:30:24.234140] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:51.571 [2024-07-15 12:30:24.234233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60615 ] 00:05:51.830 [2024-07-15 12:30:24.369010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.830 [2024-07-15 12:30:24.485720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.088 [2024-07-15 12:30:24.540091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.655 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.655 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:52.655 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60615 00:05:52.655 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60615 00:05:52.655 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60615 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60615 ']' 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60615 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60615 00:05:52.914 killing process with pid 60615 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60615' 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60615 00:05:52.914 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60615 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60615 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60615 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:53.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.483 ERROR: process (pid: 60615) is no longer running 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60615 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60615 ']' 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.483 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60615) - No such process 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.483 ************************************ 00:05:53.483 END TEST default_locks 00:05:53.483 ************************************ 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.483 00:05:53.483 real 0m1.800s 00:05:53.483 user 0m1.882s 00:05:53.483 sys 0m0.552s 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.483 12:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.483 12:30:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:53.483 12:30:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.483 12:30:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.483 12:30:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.484 12:30:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.484 ************************************ 00:05:53.484 START TEST default_locks_via_rpc 00:05:53.484 ************************************ 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60662 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
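[editor's note] The locks_exist check used throughout this trace boils down to one pipeline: a target started with cpumask locks enabled holds a file lock whose name contains spdk_cpu_lock, and lslocks can see it for that pid. A minimal sketch, assuming $pid is the spdk_tgt pid recorded by the test:

locks_exist_sketch() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # exit 0 only if a core lock is currently held
}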
00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60662 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60662 ']' 00:05:53.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.484 12:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.484 [2024-07-15 12:30:26.095939] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:53.484 [2024-07-15 12:30:26.096030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60662 ] 00:05:53.742 [2024-07-15 12:30:26.231898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.742 [2024-07-15 12:30:26.348069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.742 [2024-07-15 12:30:26.401622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60662 00:05:54.680 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60662 00:05:54.680 12:30:27 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60662 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60662 ']' 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60662 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60662 00:05:54.939 killing process with pid 60662 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60662' 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60662 00:05:54.939 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60662 00:05:55.507 ************************************ 00:05:55.507 END TEST default_locks_via_rpc 00:05:55.507 ************************************ 00:05:55.507 00:05:55.507 real 0m1.892s 00:05:55.507 user 0m2.030s 00:05:55.507 sys 0m0.575s 00:05:55.507 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.507 12:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 12:30:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.507 12:30:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:55.507 12:30:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.507 12:30:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.507 12:30:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 ************************************ 00:05:55.507 START TEST non_locking_app_on_locked_coremask 00:05:55.507 ************************************ 00:05:55.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
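[editor's note] The default_locks_via_rpc flow just logged toggles the core locks at runtime instead of at startup. A sketch under the same assumptions as the trace (rpc.py and both framework_* methods are the RPCs shown in the log; $spdk_tgt_pid is a placeholder for the pid the test recorded):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # default socket /var/tmp/spdk.sock

$rpc framework_disable_cpumask_locks               # release the core locks while running
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: core lock still held"

$rpc framework_enable_cpumask_locks                # re-claim the locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held, as expected"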
00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60707 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60707 /var/tmp/spdk.sock 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60707 ']' 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.507 12:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 [2024-07-15 12:30:28.044413] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:55.507 [2024-07-15 12:30:28.044518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60707 ] 00:05:55.507 [2024-07-15 12:30:28.183452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.767 [2024-07-15 12:30:28.300619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.767 [2024-07-15 12:30:28.353906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
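[editor's note] This test runs two targets on the same core mask, which only works because the second one opts out of cpumask locking and uses its own RPC socket. A sketch using the exact flags from the log:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &                                                   # first instance holds spdk_cpu_lock for core 0
first_pid=$!
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same mask, but no lock is taken
second_pid=$!
# Without --disable-cpumask-locks the second launch would fail with
# "Cannot create lock on core 0", as the locking_app_on_locked_coremask test later shows.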
00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60723 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60723 /var/tmp/spdk2.sock 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60723 ']' 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.704 12:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.704 [2024-07-15 12:30:29.078358] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:56.704 [2024-07-15 12:30:29.078777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60723 ] 00:05:56.704 [2024-07-15 12:30:29.219999] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.704 [2024-07-15 12:30:29.220054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.982 [2024-07-15 12:30:29.490928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.982 [2024-07-15 12:30:29.639641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.583 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.584 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.584 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60707 00:05:57.584 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60707 00:05:57.584 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60707 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60707 ']' 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60707 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60707 00:05:58.521 killing process with pid 60707 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60707' 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60707 00:05:58.521 12:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60707 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60723 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60723 ']' 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60723 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60723 00:05:59.456 killing process with pid 60723 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.456 12:30:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60723' 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60723 00:05:59.456 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60723 00:06:00.021 ************************************ 00:06:00.021 END TEST non_locking_app_on_locked_coremask 00:06:00.021 ************************************ 00:06:00.021 00:06:00.021 real 0m4.691s 00:06:00.021 user 0m5.059s 00:06:00.021 sys 0m1.211s 00:06:00.021 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.021 12:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.279 12:30:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:00.279 12:30:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.279 12:30:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.279 12:30:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.279 12:30:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.279 ************************************ 00:06:00.279 START TEST locking_app_on_unlocked_coremask 00:06:00.279 ************************************ 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:00.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60801 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60801 /var/tmp/spdk.sock 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60801 ']' 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.279 12:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.279 [2024-07-15 12:30:32.793406] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:00.279 [2024-07-15 12:30:32.793808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60801 ] 00:06:00.279 [2024-07-15 12:30:32.932703] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.279 [2024-07-15 12:30:32.933127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.537 [2024-07-15 12:30:33.083343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.537 [2024-07-15 12:30:33.158402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60817 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60817 /var/tmp/spdk2.sock 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60817 ']' 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.103 12:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.103 [2024-07-15 12:30:33.764642] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:01.103 [2024-07-15 12:30:33.764789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60817 ] 00:06:01.362 [2024-07-15 12:30:33.910512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.620 [2024-07-15 12:30:34.207670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.882 [2024-07-15 12:30:34.356975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.450 12:30:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.450 12:30:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:02.450 12:30:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60817 00:06:02.450 12:30:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.450 12:30:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60817 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60801 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60801 ']' 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60801 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60801 00:06:03.018 killing process with pid 60801 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60801' 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60801 00:06:03.018 12:30:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60801 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60817 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60817 ']' 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60817 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60817 00:06:04.417 killing process with pid 60817 00:06:04.417 12:30:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60817' 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60817 00:06:04.417 12:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60817 00:06:04.675 00:06:04.675 real 0m4.521s 00:06:04.675 user 0m4.800s 00:06:04.675 sys 0m1.214s 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.675 ************************************ 00:06:04.675 END TEST locking_app_on_unlocked_coremask 00:06:04.675 ************************************ 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.675 12:30:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:04.675 12:30:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.675 12:30:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.675 12:30:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.675 12:30:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.675 ************************************ 00:06:04.675 START TEST locking_app_on_locked_coremask 00:06:04.675 ************************************ 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:04.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60890 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60890 /var/tmp/spdk.sock 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60890 ']' 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.675 12:30:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.675 [2024-07-15 12:30:37.353996] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:04.675 [2024-07-15 12:30:37.354102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60890 ] 00:06:04.933 [2024-07-15 12:30:37.488227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.191 [2024-07-15 12:30:37.635756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.191 [2024-07-15 12:30:37.708752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60906 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60906 /var/tmp/spdk2.sock 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60906 /var/tmp/spdk2.sock 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60906 /var/tmp/spdk2.sock 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60906 ']' 00:06:05.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.758 12:30:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.758 [2024-07-15 12:30:38.402858] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:05.758 [2024-07-15 12:30:38.402997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60906 ] 00:06:06.017 [2024-07-15 12:30:38.553436] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60890 has claimed it. 00:06:06.017 [2024-07-15 12:30:38.553543] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.582 ERROR: process (pid: 60906) is no longer running 00:06:06.582 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60906) - No such process 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60890 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60890 00:06:06.582 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.872 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60890 00:06:06.872 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60890 ']' 00:06:06.872 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60890 00:06:06.872 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:06.872 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.872 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60890 00:06:07.129 killing process with pid 60890 00:06:07.129 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.129 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.129 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60890' 00:06:07.129 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60890 00:06:07.129 12:30:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60890 00:06:07.695 00:06:07.695 real 0m2.816s 00:06:07.695 user 0m3.160s 00:06:07.695 sys 0m0.720s 00:06:07.695 12:30:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.695 12:30:40 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:07.695 ************************************ 00:06:07.695 END TEST locking_app_on_locked_coremask 00:06:07.695 ************************************ 00:06:07.695 12:30:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:07.695 12:30:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.695 12:30:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.695 12:30:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.695 12:30:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.695 ************************************ 00:06:07.695 START TEST locking_overlapped_coremask 00:06:07.695 ************************************ 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60951 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60951 /var/tmp/spdk.sock 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60951 ']' 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.695 12:30:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.695 [2024-07-15 12:30:40.217103] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:07.695 [2024-07-15 12:30:40.217205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60951 ] 00:06:07.695 [2024-07-15 12:30:40.352948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.953 [2024-07-15 12:30:40.507472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.953 [2024-07-15 12:30:40.507592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.953 [2024-07-15 12:30:40.507602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.953 [2024-07-15 12:30:40.584034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60969 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60969 /var/tmp/spdk2.sock 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60969 /var/tmp/spdk2.sock 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60969 /var/tmp/spdk2.sock 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60969 ']' 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.520 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.521 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.521 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.779 [2024-07-15 12:30:41.212038] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:08.779 [2024-07-15 12:30:41.212159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60969 ] 00:06:08.779 [2024-07-15 12:30:41.366112] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60951 has claimed it. 00:06:08.779 [2024-07-15 12:30:41.366197] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.346 ERROR: process (pid: 60969) is no longer running 00:06:09.346 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60969) - No such process 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60951 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60951 ']' 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60951 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60951 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60951' 00:06:09.346 killing process with pid 60951 00:06:09.346 12:30:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60951 00:06:09.346 12:30:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60951 00:06:09.925 00:06:09.925 real 0m2.289s 00:06:09.925 user 0m6.010s 00:06:09.925 sys 0m0.526s 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.925 ************************************ 00:06:09.925 END TEST locking_overlapped_coremask 00:06:09.925 ************************************ 00:06:09.925 12:30:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:09.925 12:30:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.925 12:30:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.925 12:30:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.925 12:30:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.925 ************************************ 00:06:09.925 START TEST locking_overlapped_coremask_via_rpc 00:06:09.925 ************************************ 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61015 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61015 /var/tmp/spdk.sock 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61015 ']' 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.925 12:30:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.925 [2024-07-15 12:30:42.560765] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:09.925 [2024-07-15 12:30:42.560873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ] 00:06:10.208 [2024-07-15 12:30:42.695441] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.208 [2024-07-15 12:30:42.695518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.208 [2024-07-15 12:30:42.847606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.208 [2024-07-15 12:30:42.847778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.208 [2024-07-15 12:30:42.848068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.478 [2024-07-15 12:30:42.921920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61033 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61033 /var/tmp/spdk2.sock 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61033 ']' 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.058 12:30:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.058 [2024-07-15 12:30:43.569028] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:11.058 [2024-07-15 12:30:43.569393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61033 ] 00:06:11.058 [2024-07-15 12:30:43.712985] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.058 [2024-07-15 12:30:43.713050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.316 [2024-07-15 12:30:43.955511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.316 [2024-07-15 12:30:43.958823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.316 [2024-07-15 12:30:43.958824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.573 [2024-07-15 12:30:44.063914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.140 [2024-07-15 12:30:44.558924] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61015 has claimed it. 00:06:12.140 request: 00:06:12.140 { 00:06:12.140 "method": "framework_enable_cpumask_locks", 00:06:12.140 "req_id": 1 00:06:12.140 } 00:06:12.140 Got JSON-RPC error response 00:06:12.140 response: 00:06:12.140 { 00:06:12.140 "code": -32603, 00:06:12.140 "message": "Failed to claim CPU core: 2" 00:06:12.140 } 00:06:12.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
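The framework_enable_cpumask_locks failure above is an ordinary JSON-RPC exchange against the secondary target's Unix socket, effectively the same call the test issues through rpc_cmd. A minimal way to reproduce it by hand, assuming scripts/rpc.py from the same checkout is available at this path (an assumption, not part of the test):

# Ask the secondary target (started with -r /var/tmp/spdk2.sock) to claim its cores.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# While pid 61015 still holds the lock on core 2, this is expected to fail with
# JSON-RPC error -32603, "Failed to claim CPU core: 2", as shown in the log above.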
00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61015 /var/tmp/spdk.sock 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61015 ']' 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.140 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.399 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.399 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.399 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61033 /var/tmp/spdk2.sock 00:06:12.399 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61033 ']' 00:06:12.399 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.400 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.400 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:12.400 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.400 12:30:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.671 ************************************ 00:06:12.671 END TEST locking_overlapped_coremask_via_rpc 00:06:12.671 ************************************ 00:06:12.671 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.671 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.671 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:12.671 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.671 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.671 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.671 00:06:12.671 real 0m2.685s 00:06:12.671 user 0m1.397s 00:06:12.671 sys 0m0.217s 00:06:12.672 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.672 12:30:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.672 12:30:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:12.672 12:30:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61015 ]] 00:06:12.672 12:30:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61015 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61015 ']' 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61015 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61015 00:06:12.672 killing process with pid 61015 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61015' 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61015 00:06:12.672 12:30:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61015 00:06:13.238 12:30:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61033 ]] 00:06:13.238 12:30:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61033 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61033 ']' 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61033 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:13.238 12:30:45 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61033 00:06:13.238 killing process with pid 61033 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61033' 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61033 00:06:13.238 12:30:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61033 00:06:13.808 12:30:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.808 12:30:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:13.808 12:30:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61015 ]] 00:06:13.808 12:30:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61015 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61015 ']' 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61015 00:06:13.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61015) - No such process 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61015 is not found' 00:06:13.808 Process with pid 61015 is not found 00:06:13.808 12:30:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61033 ]] 00:06:13.808 12:30:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61033 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61033 ']' 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61033 00:06:13.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61033) - No such process 00:06:13.808 Process with pid 61033 is not found 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61033 is not found' 00:06:13.808 12:30:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.808 ************************************ 00:06:13.808 END TEST cpu_locks 00:06:13.808 ************************************ 00:06:13.808 00:06:13.808 real 0m22.165s 00:06:13.808 user 0m37.607s 00:06:13.808 sys 0m5.922s 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.808 12:30:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.808 12:30:46 event -- common/autotest_common.sh@1142 -- # return 0 00:06:13.808 ************************************ 00:06:13.808 END TEST event 00:06:13.808 ************************************ 00:06:13.808 00:06:13.808 real 0m50.647s 00:06:13.808 user 1m36.246s 00:06:13.808 sys 0m9.733s 00:06:13.808 12:30:46 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.808 12:30:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.808 12:30:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.808 12:30:46 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:13.808 12:30:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.808 12:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.808 12:30:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.808 ************************************ 00:06:13.808 START TEST thread 
00:06:13.808 ************************************ 00:06:13.808 12:30:46 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:13.808 * Looking for test storage... 00:06:13.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:13.808 12:30:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.808 12:30:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:13.808 12:30:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.808 12:30:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.808 ************************************ 00:06:13.808 START TEST thread_poller_perf 00:06:13.808 ************************************ 00:06:13.808 12:30:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.808 [2024-07-15 12:30:46.435259] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:13.808 [2024-07-15 12:30:46.435365] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61161 ] 00:06:14.067 [2024-07-15 12:30:46.571690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.067 [2024-07-15 12:30:46.695532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.067 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:15.494 ====================================== 00:06:15.494 busy:2206677210 (cyc) 00:06:15.494 total_run_count: 315000 00:06:15.494 tsc_hz: 2200000000 (cyc) 00:06:15.494 ====================================== 00:06:15.494 poller_cost: 7005 (cyc), 3184 (nsec) 00:06:15.494 00:06:15.494 real 0m1.375s 00:06:15.494 user 0m1.208s 00:06:15.494 sys 0m0.058s 00:06:15.494 12:30:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.494 12:30:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 ************************************ 00:06:15.494 END TEST thread_poller_perf 00:06:15.494 ************************************ 00:06:15.494 12:30:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:15.494 12:30:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.494 12:30:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:15.494 12:30:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.494 12:30:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 ************************************ 00:06:15.494 START TEST thread_poller_perf 00:06:15.494 ************************************ 00:06:15.494 12:30:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.494 [2024-07-15 12:30:47.867119] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:15.494 [2024-07-15 12:30:47.867264] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61191 ] 00:06:15.494 [2024-07-15 12:30:48.011947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.494 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:15.494 [2024-07-15 12:30:48.171410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.870 ====================================== 00:06:16.870 busy:2202561722 (cyc) 00:06:16.870 total_run_count: 4191000 00:06:16.870 tsc_hz: 2200000000 (cyc) 00:06:16.870 ====================================== 00:06:16.870 poller_cost: 525 (cyc), 238 (nsec) 00:06:16.870 ************************************ 00:06:16.870 END TEST thread_poller_perf 00:06:16.870 ************************************ 00:06:16.870 00:06:16.870 real 0m1.416s 00:06:16.870 user 0m1.238s 00:06:16.870 sys 0m0.067s 00:06:16.870 12:30:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.870 12:30:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.870 12:30:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:16.870 12:30:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:16.870 ************************************ 00:06:16.870 END TEST thread 00:06:16.870 ************************************ 00:06:16.870 00:06:16.870 real 0m2.975s 00:06:16.870 user 0m2.500s 00:06:16.870 sys 0m0.252s 00:06:16.870 12:30:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.870 12:30:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.870 12:30:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.870 12:30:49 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:16.870 12:30:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.870 12:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.870 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:06:16.870 ************************************ 00:06:16.870 START TEST accel 00:06:16.870 ************************************ 00:06:16.870 12:30:49 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:16.870 * Looking for test storage... 00:06:16.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:16.870 12:30:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:16.870 12:30:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:16.870 12:30:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.870 12:30:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61265 00:06:16.870 12:30:49 accel -- accel/accel.sh@63 -- # waitforlisten 61265 00:06:16.870 12:30:49 accel -- common/autotest_common.sh@829 -- # '[' -z 61265 ']' 00:06:16.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.870 12:30:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.870 12:30:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.870 12:30:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
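The poller_cost figures reported by poller_perf line up with a straightforward derivation from the counters it prints: cycles per iteration is busy cycles divided by total_run_count, and the nanosecond value follows from tsc_hz (2.2 GHz here). A quick cross-check of the zero-period run above, assuming the tool simply truncates when reporting:

awk 'BEGIN {
    busy   = 2202561722     # "busy:" cycles from the run above
    runs   = 4191000        # "total_run_count:"
    tsc_hz = 2200000000     # 2.2 GHz
    cyc  = busy / runs               # ~525.5 cycles per poll
    nsec = cyc / (tsc_hz / 1e9)      # ~238.9 ns
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec   # matches 525 / 238
}'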
00:06:16.870 12:30:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.870 12:30:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.870 12:30:49 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:16.870 12:30:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:16.870 12:30:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.870 12:30:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.870 12:30:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.870 12:30:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.870 12:30:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.870 12:30:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:16.870 12:30:49 accel -- accel/accel.sh@41 -- # jq -r . 00:06:16.870 [2024-07-15 12:30:49.506889] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:16.871 [2024-07-15 12:30:49.507272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61265 ] 00:06:17.130 [2024-07-15 12:30:49.639892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.130 [2024-07-15 12:30:49.760489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.390 [2024-07-15 12:30:49.816223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.957 12:30:50 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.957 12:30:50 accel -- common/autotest_common.sh@862 -- # return 0 00:06:17.957 12:30:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:17.957 12:30:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:17.957 12:30:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:17.957 12:30:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:17.957 12:30:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:17.957 12:30:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:17.957 12:30:50 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.957 12:30:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:17.957 12:30:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.957 12:30:50 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 
12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.957 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.957 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.957 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.958 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.958 12:30:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.958 12:30:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.958 12:30:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.958 12:30:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.958 12:30:50 accel -- accel/accel.sh@75 -- # killprocess 61265 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@948 -- # '[' -z 61265 ']' 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@952 -- # kill -0 61265 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@953 -- # uname 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61265 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61265' 00:06:17.958 killing process with pid 61265 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@967 -- # kill 61265 00:06:17.958 12:30:50 accel -- common/autotest_common.sh@972 -- # wait 61265 00:06:18.524 12:30:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:18.524 12:30:50 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:18.524 12:30:50 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:18.524 12:30:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.524 12:30:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.524 12:30:50 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:18.524 12:30:50 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:18.524 12:30:50 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.524 12:30:50 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:18.524 12:30:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.524 12:30:50 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:18.524 12:30:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.524 12:30:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.524 12:30:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.524 ************************************ 00:06:18.524 START TEST accel_missing_filename 00:06:18.524 ************************************ 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.524 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:18.524 12:30:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:18.524 12:30:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:18.525 12:30:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.525 12:30:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.525 12:30:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.525 12:30:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.525 12:30:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.525 12:30:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:18.525 12:30:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:18.525 [2024-07-15 12:30:51.024902] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:18.525 [2024-07-15 12:30:51.024993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61317 ] 00:06:18.525 [2024-07-15 12:30:51.159196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.784 [2024-07-15 12:30:51.280578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.784 [2024-07-15 12:30:51.335833] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.784 [2024-07-15 12:30:51.412210] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:19.042 A filename is required. 
00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.042 00:06:19.042 real 0m0.504s 00:06:19.042 user 0m0.406s 00:06:19.042 sys 0m0.122s 00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.042 12:30:51 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:19.042 ************************************ 00:06:19.042 END TEST accel_missing_filename 00:06:19.042 ************************************ 00:06:19.042 12:30:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.042 12:30:51 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.042 12:30:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:19.042 12:30:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.042 12:30:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.042 ************************************ 00:06:19.042 START TEST accel_compress_verify 00:06:19.042 ************************************ 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.042 12:30:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.042 12:30:51 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:19.042 12:30:51 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:19.042 [2024-07-15 12:30:51.576702] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:19.042 [2024-07-15 12:30:51.576837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:06:19.042 [2024-07-15 12:30:51.720485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.301 [2024-07-15 12:30:51.840753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.301 [2024-07-15 12:30:51.895376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.301 [2024-07-15 12:30:51.970541] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:19.561 00:06:19.561 Compression does not support the verify option, aborting. 00:06:19.561 ************************************ 00:06:19.561 END TEST accel_compress_verify 00:06:19.561 ************************************ 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.561 00:06:19.561 real 0m0.510s 00:06:19.561 user 0m0.352s 00:06:19.561 sys 0m0.116s 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.561 12:30:52 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.561 12:30:52 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.561 ************************************ 00:06:19.561 START TEST accel_wrong_workload 00:06:19.561 ************************************ 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
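The common/autotest_common.sh frames above show the NOT wrapper at work: it runs accel_perf, captures the exit status into es, folds codes above 128 (the earlier es=234 becoming es=106 and then es=1), and succeeds only when the wrapped command failed. A simplified sketch of that pattern, not the actual helper:

  NOT() {
      local es=0
      "$@" || es=$?
      if ((es > 128)); then
          es=$((es - 128))   # fold signal-style exit codes, as in the es=234 -> es=106 step
      fi
      ((es != 0))            # NOT succeeds only if the wrapped command failed
  }
  NOT accel_perf -t 1 -w foobar   # expected to succeed, because foobar is rejected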
00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:19.561 12:30:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:19.561 Unsupported workload type: foobar 00:06:19.561 [2024-07-15 12:30:52.144265] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:19.561 accel_perf options: 00:06:19.561 [-h help message] 00:06:19.561 [-q queue depth per core] 00:06:19.561 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.561 [-T number of threads per core 00:06:19.561 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:19.561 [-t time in seconds] 00:06:19.561 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.561 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:19.561 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.561 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.561 [-S for crc32c workload, use this seed value (default 0) 00:06:19.561 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.561 [-f for fill workload, use this BYTE value (default 255) 00:06:19.561 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.561 [-y verify result if this switch is on] 00:06:19.561 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.561 Can be used to spread operations across a wider range of memory. 
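The option listing above is accel_perf's own usage text, printed because '-w foobar' is not a recognized workload. For contrast, a valid invocation assembled only from flags that appear in that listing, with illustrative values rather than ones taken from this run:

  # 4 KiB transfers, queue depth 32, CRC-32C with seed 32, verify results, run for 5 seconds
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -q 32 -o 4096 -t 5 -w crc32c -S 32 -y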
00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.561 00:06:19.561 real 0m0.041s 00:06:19.561 user 0m0.020s 00:06:19.561 sys 0m0.020s 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.561 ************************************ 00:06:19.561 END TEST accel_wrong_workload 00:06:19.561 ************************************ 00:06:19.561 12:30:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.561 12:30:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.561 12:30:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.561 ************************************ 00:06:19.561 START TEST accel_negative_buffers 00:06:19.561 ************************************ 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:19.561 12:30:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:19.561 -x option must be non-negative. 
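The "-x option must be non-negative." message is accel_perf rejecting the '-x -1' passed by the accel_negative_buffers case. The usage text above documents a minimum of two source buffers for xor, so the smallest valid form would look like this (illustrative, not taken from this run):

  # xor over the documented minimum of two source buffers, verifying the result
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2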
00:06:19.561 [2024-07-15 12:30:52.222792] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:19.561 accel_perf options: 00:06:19.561 [-h help message] 00:06:19.561 [-q queue depth per core] 00:06:19.561 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.561 [-T number of threads per core 00:06:19.561 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:19.561 [-t time in seconds] 00:06:19.561 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.561 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:19.561 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.561 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.561 [-S for crc32c workload, use this seed value (default 0) 00:06:19.561 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.561 [-f for fill workload, use this BYTE value (default 255) 00:06:19.561 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.561 [-y verify result if this switch is on] 00:06:19.561 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.561 Can be used to spread operations across a wider range of memory. 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.561 00:06:19.561 real 0m0.025s 00:06:19.561 user 0m0.016s 00:06:19.561 sys 0m0.009s 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.561 ************************************ 00:06:19.561 END TEST accel_negative_buffers 00:06:19.561 ************************************ 00:06:19.561 12:30:52 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:19.820 12:30:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.820 12:30:52 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:19.820 12:30:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:19.821 12:30:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.821 12:30:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.821 ************************************ 00:06:19.821 START TEST accel_crc32c 00:06:19.821 ************************************ 00:06:19.821 12:30:52 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:19.821 12:30:52 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:19.821 [2024-07-15 12:30:52.302850] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:19.821 [2024-07-15 12:30:52.302990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61405 ] 00:06:19.821 [2024-07-15 12:30:52.451942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.080 [2024-07-15 12:30:52.571895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.080 12:30:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.456 12:30:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.456 00:06:21.456 real 0m1.531s 00:06:21.456 user 0m1.314s 00:06:21.456 sys 0m0.123s 00:06:21.456 12:30:53 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.456 12:30:53 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:21.456 ************************************ 00:06:21.456 END TEST accel_crc32c 00:06:21.456 ************************************ 00:06:21.456 12:30:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.456 12:30:53 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:21.456 12:30:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.456 12:30:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.456 12:30:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.456 ************************************ 00:06:21.456 START TEST accel_crc32c_C2 00:06:21.456 ************************************ 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:21.456 12:30:53 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.456 12:30:53 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:21.456 [2024-07-15 12:30:53.879133] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:21.456 [2024-07-15 12:30:53.879270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61440 ] 00:06:21.456 [2024-07-15 12:30:54.019267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.715 [2024-07-15 12:30:54.139295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.715 12:30:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.092 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.092 12:30:55 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.092 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.092 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.092 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.092 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.092 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.092 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:23.093 ************************************ 00:06:23.093 END TEST accel_crc32c_C2 00:06:23.093 ************************************ 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.093 00:06:23.093 real 0m1.514s 00:06:23.093 user 0m1.296s 00:06:23.093 sys 0m0.119s 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.093 12:30:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:23.093 12:30:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.093 12:30:55 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:23.093 12:30:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:23.093 12:30:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.093 12:30:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.093 ************************************ 00:06:23.093 START TEST accel_copy 00:06:23.093 ************************************ 00:06:23.093 12:30:55 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.093 12:30:55 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:23.093 12:30:55 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:23.093 [2024-07-15 12:30:55.440245] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:23.093 [2024-07-15 12:30:55.440966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61473 ] 00:06:23.093 [2024-07-15 12:30:55.584201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.093 [2024-07-15 12:30:55.714545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 
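The accel/accel.sh@19-23 frames repeating through this copy run are a parse loop: accel_perf's startup banner is split on ':' into var/val pairs, and the reported operation and module are kept so the later [[ -n software ]] / [[ software == \s\o\f\t\w\a\r\e ]] checks can confirm what actually ran. A simplified sketch of that pattern, with illustrative banner keys rather than the real accel.sh code:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  while IFS=: read -r var val; do
      val=$(echo $val)                       # trim surrounding whitespace from the value
      case "$var" in
          *"Workload Type"*) accel_opc=$val ;;
          *Module*)          accel_module=$val ;;
      esac
  done < <("$ACCEL_PERF" -t 1 -w copy -y 2>&1)
  [[ -n $accel_module && -n $accel_opc ]]    # both fields were reported
  [[ $accel_opc == copy ]]                   # the requested workload actually ran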
12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.352 12:30:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.353 12:30:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.353 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.353 12:30:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.288 12:30:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:24.289 ************************************ 00:06:24.289 END TEST accel_copy 00:06:24.289 ************************************ 00:06:24.289 12:30:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.289 00:06:24.289 real 0m1.535s 00:06:24.289 user 0m1.314s 00:06:24.289 sys 0m0.117s 00:06:24.289 12:30:56 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.289 12:30:56 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.548 12:30:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.548 12:30:56 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.548 12:30:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:24.548 12:30:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.548 12:30:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.548 ************************************ 00:06:24.548 START TEST accel_fill 00:06:24.548 ************************************ 00:06:24.548 12:30:57 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.548 12:30:57 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:24.548 12:30:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:24.548 [2024-07-15 12:30:57.027522] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:24.548 [2024-07-15 12:30:57.027615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61509 ] 00:06:24.548 [2024-07-15 12:30:57.163141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.807 [2024-07-15 12:30:57.280906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.807 12:30:57 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.807 12:30:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.185 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:26.186 12:30:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.186 ************************************ 00:06:26.186 END TEST accel_fill 00:06:26.186 ************************************ 00:06:26.186 00:06:26.186 real 0m1.525s 00:06:26.186 user 0m1.310s 00:06:26.186 sys 0m0.118s 00:06:26.186 12:30:58 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.186 12:30:58 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:26.186 12:30:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.186 12:30:58 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:26.186 12:30:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:26.186 12:30:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.186 12:30:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.186 ************************************ 00:06:26.186 START TEST accel_copy_crc32c 00:06:26.186 ************************************ 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:26.186 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:26.186 [2024-07-15 12:30:58.602242] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:26.186 [2024-07-15 12:30:58.602356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61543 ] 00:06:26.186 [2024-07-15 12:30:58.737808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.186 [2024-07-15 12:30:58.855181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 12:30:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
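The dense block of IFS=:, read -r var val, and case "$var" lines above is accel.sh walking colon-separated var:val pairs, picking out the operation name (accel_opc=copy_crc32c) and the engine (accel_module=software) so it can assert afterwards that the software module handled the requested opcode. The loop below is only a simplified illustration of that shell pattern, not the actual accel.sh code; the key names opc and module and the printf stand-in input are placeholders, since the xtrace does not show the real input stream.

  # Simplified illustration of the parse-and-assert pattern seen in the xtrace above.
  accel_opc= accel_module=
  while IFS=: read -r var val; do
    case "$var" in
      opc) accel_opc=$val ;;        # placeholder key name
      module) accel_module=$val ;;  # placeholder key name
    esac
  done < <(printf '%s\n' opc:copy_crc32c module:software)  # stand-in for the real source
  [[ -n $accel_opc && -n $accel_module && $accel_module == software ]] && echo "software ran copy_crc32c"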
00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.824 00:06:27.824 real 0m1.509s 00:06:27.824 user 0m1.292s 00:06:27.824 sys 0m0.126s 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.824 12:31:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:27.824 ************************************ 00:06:27.824 END TEST accel_copy_crc32c 00:06:27.824 ************************************ 00:06:27.824 12:31:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.824 12:31:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.824 12:31:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.824 12:31:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.824 12:31:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.824 ************************************ 00:06:27.824 START TEST accel_copy_crc32c_C2 00:06:27.824 ************************************ 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:27.824 [2024-07-15 12:31:00.163160] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:27.824 [2024-07-15 12:31:00.163270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61578 ] 00:06:27.824 [2024-07-15 12:31:00.299833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.824 [2024-07-15 12:31:00.416674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.824 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.825 12:31:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.201 ************************************ 00:06:29.201 END TEST accel_copy_crc32c_C2 00:06:29.201 ************************************ 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.201 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.202 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.202 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.202 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.202 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:29.202 12:31:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.202 00:06:29.202 real 0m1.504s 00:06:29.202 
user 0m1.296s 00:06:29.202 sys 0m0.115s 00:06:29.202 12:31:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.202 12:31:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:29.202 12:31:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.202 12:31:01 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:29.202 12:31:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.202 12:31:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.202 12:31:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.202 ************************************ 00:06:29.202 START TEST accel_dualcast 00:06:29.202 ************************************ 00:06:29.202 12:31:01 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:29.202 12:31:01 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:29.202 [2024-07-15 12:31:01.720670] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
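For anyone wanting to repeat the two copy_crc32c cases above outside Jenkins, the accel_perf binary and flags recorded in this log can be invoked directly. The path matches this run's checkout, and dropping the -c /dev/fd/62 option (the JSON config the harness assembles via accel_json_cfg and jq) is an assumption that the default software module settings are acceptable.

  # Plain copy_crc32c for 1 second with verification, then the chained
  # variant the harness calls accel_copy_crc32c_C2 (-C 2), as logged above.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2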
00:06:29.202 [2024-07-15 12:31:01.720776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61611 ] 00:06:29.202 [2024-07-15 12:31:01.854565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.461 [2024-07-15 12:31:01.980668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.461 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.462 12:31:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.837 12:31:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:30.838 12:31:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.838 00:06:30.838 real 0m1.512s 00:06:30.838 user 0m1.295s 00:06:30.838 sys 0m0.122s 00:06:30.838 12:31:03 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.838 12:31:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:30.838 ************************************ 00:06:30.838 END TEST accel_dualcast 00:06:30.838 ************************************ 00:06:30.838 12:31:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.838 12:31:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:30.838 12:31:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.838 12:31:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.838 12:31:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.838 ************************************ 00:06:30.838 START TEST accel_compare 00:06:30.838 ************************************ 00:06:30.838 12:31:03 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:30.838 12:31:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:30.838 [2024-07-15 12:31:03.284027] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
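The START TEST / END TEST banners and the real/user/sys triples that recur throughout this section come from the run_test helper in autotest_common.sh timing each case; the accel_copy_crc32c, accel_copy_crc32c_C2 and accel_dualcast results above all follow the same shape. The wrapper below is only a rough sketch of that observable behavior, not the real autotest_common.sh implementation, and the function name is made up for the sketch.

  # Rough sketch: banner the case, time it with bash's time keyword, banner again.
  # A 1-second accel_perf run shows up as roughly 1.5s of real time including start-up.
  run_test_sketch() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
  }
  run_test_sketch accel_compare /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y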
00:06:30.838 [2024-07-15 12:31:03.284147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61647 ] 00:06:30.838 [2024-07-15 12:31:03.424059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.097 [2024-07-15 12:31:03.543569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 12:31:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:32.475 12:31:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.475 00:06:32.475 real 0m1.513s 00:06:32.475 user 0m1.302s 00:06:32.475 sys 0m0.112s 00:06:32.475 12:31:04 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.475 12:31:04 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:32.475 ************************************ 00:06:32.475 END TEST accel_compare 00:06:32.475 ************************************ 00:06:32.475 12:31:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.475 12:31:04 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:32.475 12:31:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:32.475 12:31:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.475 12:31:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.475 ************************************ 00:06:32.475 START TEST accel_xor 00:06:32.475 ************************************ 00:06:32.475 12:31:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:32.475 12:31:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:32.475 12:31:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:32.475 12:31:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 12:31:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 12:31:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:32.475 12:31:04 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:32.475 12:31:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:32.476 12:31:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.476 12:31:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.476 12:31:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.476 12:31:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.476 12:31:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.476 12:31:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:32.476 12:31:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:32.476 [2024-07-15 12:31:04.846211] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:32.476 [2024-07-15 12:31:04.846298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61686 ] 00:06:32.476 [2024-07-15 12:31:04.979720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.476 [2024-07-15 12:31:05.101067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 12:31:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.671 12:31:06 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:33.671 12:31:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.671 00:06:33.671 real 0m1.513s 00:06:33.671 user 0m1.305s 00:06:33.671 sys 0m0.116s 00:06:33.671 12:31:06 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.671 ************************************ 00:06:33.671 END TEST accel_xor 00:06:33.671 ************************************ 00:06:33.671 12:31:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:33.929 12:31:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.929 12:31:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:33.929 12:31:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:33.929 12:31:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.929 12:31:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.929 ************************************ 00:06:33.929 START TEST accel_xor 00:06:33.929 ************************************ 00:06:33.929 12:31:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:33.929 12:31:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:33.929 [2024-07-15 12:31:06.414261] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
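The accel_xor case that just finished echoed val=2 at the spot where the -x 3 run starting here echoes val=3 further down, which suggests the -x flag sets the number of xor source buffers; that reading is an inference from the test names and flags, not something the log states outright. Both command lines below are copied from the log, under the same path and no-JSON-config assumptions as the earlier sketches.

  # xor with the default source count, then the -x 3 variant logged here.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w xor -y
  ./build/examples/accel_perf -t 1 -w xor -y -x 3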
00:06:33.929 [2024-07-15 12:31:06.414365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61718 ] 00:06:33.929 [2024-07-15 12:31:06.553412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.188 [2024-07-15 12:31:06.672308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.188 12:31:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.564 12:31:07 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.564 ************************************ 00:06:35.564 END TEST accel_xor 00:06:35.564 ************************************ 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:35.564 12:31:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.564 00:06:35.564 real 0m1.509s 00:06:35.564 user 0m1.298s 00:06:35.564 sys 0m0.115s 00:06:35.564 12:31:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.564 12:31:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:35.564 12:31:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.564 12:31:07 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:35.564 12:31:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:35.564 12:31:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.564 12:31:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.564 ************************************ 00:06:35.564 START TEST accel_dif_verify 00:06:35.564 ************************************ 00:06:35.564 12:31:07 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:35.564 12:31:07 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:35.564 [2024-07-15 12:31:07.969017] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:35.564 [2024-07-15 12:31:07.969117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61753 ] 00:06:35.564 [2024-07-15 12:31:08.103490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.564 [2024-07-15 12:31:08.225360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.823 12:31:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:37.199 12:31:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.199 00:06:37.199 real 0m1.511s 00:06:37.199 user 0m1.302s 00:06:37.199 sys 0m0.116s 00:06:37.199 12:31:09 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.199 12:31:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:37.199 ************************************ 00:06:37.199 END TEST accel_dif_verify 00:06:37.199 ************************************ 00:06:37.199 12:31:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.199 12:31:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:37.199 12:31:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:37.199 12:31:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.199 12:31:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.199 ************************************ 00:06:37.199 START TEST accel_dif_generate 00:06:37.199 ************************************ 00:06:37.199 12:31:09 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:37.199 [2024-07-15 12:31:09.527491] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:37.199 [2024-07-15 12:31:09.527595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61787 ] 00:06:37.199 [2024-07-15 12:31:09.658985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.199 [2024-07-15 12:31:09.780372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:37.199 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.200 12:31:09 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.200 12:31:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.574 ************************************ 00:06:38.574 END TEST accel_dif_generate 00:06:38.574 ************************************ 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.574 12:31:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:38.574 
12:31:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.574 00:06:38.574 real 0m1.501s 00:06:38.574 user 0m1.293s 00:06:38.574 sys 0m0.114s 00:06:38.574 12:31:11 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.574 12:31:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:38.574 12:31:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.574 12:31:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:38.574 12:31:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:38.574 12:31:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.574 12:31:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.574 ************************************ 00:06:38.574 START TEST accel_dif_generate_copy 00:06:38.574 ************************************ 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:38.574 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:38.574 [2024-07-15 12:31:11.079327] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:38.574 [2024-07-15 12:31:11.079424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61822 ] 00:06:38.574 [2024-07-15 12:31:11.212693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.831 [2024-07-15 12:31:11.336028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.831 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.832 12:31:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.204 00:06:40.204 real 0m1.504s 00:06:40.204 user 0m1.295s 00:06:40.204 sys 0m0.115s 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.204 ************************************ 00:06:40.204 END TEST accel_dif_generate_copy 00:06:40.204 12:31:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.204 ************************************ 00:06:40.204 12:31:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.204 12:31:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:40.204 12:31:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.204 12:31:12 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:40.204 12:31:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.204 12:31:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.204 ************************************ 00:06:40.204 START TEST accel_comp 00:06:40.204 ************************************ 00:06:40.204 12:31:12 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:40.204 12:31:12 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:40.204 12:31:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:40.204 [2024-07-15 12:31:12.633116] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:40.204 [2024-07-15 12:31:12.633221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61857 ] 00:06:40.204 [2024-07-15 12:31:12.770990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.463 [2024-07-15 12:31:12.887137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.463 12:31:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.837 12:31:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:41.838 12:31:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.838 00:06:41.838 real 0m1.506s 00:06:41.838 user 0m1.299s 00:06:41.838 sys 0m0.116s 00:06:41.838 12:31:14 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.838 ************************************ 00:06:41.838 END TEST accel_comp 00:06:41.838 ************************************ 00:06:41.838 12:31:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:41.838 12:31:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.838 12:31:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.838 12:31:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:41.838 12:31:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.838 12:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.838 ************************************ 00:06:41.838 START TEST accel_decomp 00:06:41.838 ************************************ 00:06:41.838 12:31:14 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:41.838 [2024-07-15 12:31:14.186299] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:41.838 [2024-07-15 12:31:14.186419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61892 ] 00:06:41.838 [2024-07-15 12:31:14.321658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.838 [2024-07-15 12:31:14.438535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.838 12:31:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.216 12:31:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.216 12:31:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.216 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.216 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.216 12:31:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.216 12:31:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.216 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.217 12:31:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.217 00:06:43.217 real 0m1.505s 00:06:43.217 user 0m1.306s 00:06:43.217 sys 0m0.108s 00:06:43.217 12:31:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.217 12:31:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:43.217 ************************************ 00:06:43.217 END TEST accel_decomp 00:06:43.217 ************************************ 00:06:43.217 12:31:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.217 12:31:15 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.217 12:31:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:43.217 12:31:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.217 12:31:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.217 ************************************ 00:06:43.217 START TEST accel_decomp_full 00:06:43.217 ************************************ 00:06:43.217 12:31:15 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:43.217 12:31:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:43.217 [2024-07-15 12:31:15.745105] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
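accel_decomp_full repeats the decompress case that just finished, with -o 0 added to the accel_test arguments; judging by the configuration echo that follows, that switches the working buffer from '4096 bytes' to the full '111250 bytes' payload. A hand-run equivalent of the accel_perf command traced above would look roughly like this (the harness-supplied -c /dev/fd/62 JSON config is left out, which is an assumption about what can safely be dropped for a quick manual run):

    # Full-buffer software decompress for 1 second against the bib test file,
    # mirroring the accel_perf arguments shown in the xtrace above.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0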
00:06:43.217 [2024-07-15 12:31:15.745218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61926 ] 00:06:43.217 [2024-07-15 12:31:15.884888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.476 [2024-07-15 12:31:16.005372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:43.476 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 12:31:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.930 12:31:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.930 00:06:44.930 real 0m1.525s 00:06:44.930 user 0m0.018s 00:06:44.930 sys 0m0.002s 00:06:44.930 12:31:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.930 12:31:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:44.930 ************************************ 00:06:44.930 END TEST accel_decomp_full 00:06:44.930 ************************************ 00:06:44.930 12:31:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.930 12:31:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.930 12:31:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:44.930 12:31:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.930 12:31:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.930 ************************************ 00:06:44.930 START TEST accel_decomp_mcore 00:06:44.930 ************************************ 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:44.930 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:44.930 [2024-07-15 12:31:17.319422] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:44.930 [2024-07-15 12:31:17.319524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61963 ] 00:06:44.930 [2024-07-15 12:31:17.459482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.930 [2024-07-15 12:31:17.580676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.930 [2024-07-15 12:31:17.580818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.930 [2024-07-15 12:31:17.581267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.931 [2024-07-15 12:31:17.581312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
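The -m 0xf core mask passed to accel_perf here is why the EAL parameter line above ends up with -c 0xf, reports four available cores, and starts reactors on cores 0 through 3; the single-core runs earlier were launched with -c 0x1 and started one reactor. Expanding such a mask by hand is a one-liner:

    # 0xf == 0b1111, i.e. cores 0-3, matching the four
    # "Reactor started on core N" notices in this run.
    mask=0xf
    printf 'cores in mask %#x:' "$mask"
    for c in {0..31}; do (( (mask >> c) & 1 )) && printf ' %d' "$c"; done
    echo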
00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 
12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.189 12:31:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.565 00:06:46.565 real 0m1.523s 00:06:46.565 user 0m4.695s 00:06:46.565 sys 0m0.128s 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.565 12:31:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:46.565 ************************************ 00:06:46.565 END TEST accel_decomp_mcore 00:06:46.565 ************************************ 00:06:46.565 12:31:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.565 12:31:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.565 12:31:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.565 12:31:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.565 12:31:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.565 ************************************ 00:06:46.565 START TEST accel_decomp_full_mcore 00:06:46.565 ************************************ 00:06:46.565 12:31:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.565 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:46.565 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:46.565 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.565 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:06:46.566 12:31:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:46.566 [2024-07-15 12:31:18.888758] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:46.566 [2024-07-15 12:31:18.888896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62000 ] 00:06:46.566 [2024-07-15 12:31:19.029906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.566 [2024-07-15 12:31:19.149649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.566 [2024-07-15 12:31:19.149787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.566 [2024-07-15 12:31:19.150230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.566 [2024-07-15 12:31:19.150274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.566 12:31:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:47.942 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.943 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.943 ************************************ 00:06:47.943 END TEST accel_decomp_full_mcore 00:06:47.943 ************************************ 00:06:47.943 12:31:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.943 00:06:47.943 real 0m1.540s 00:06:47.943 user 0m4.724s 00:06:47.943 sys 0m0.142s 00:06:47.943 12:31:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.943 12:31:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:47.943 12:31:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.943 12:31:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:47.943 12:31:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:47.943 12:31:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.943 12:31:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.943 ************************************ 00:06:47.943 START TEST accel_decomp_mthread 00:06:47.943 ************************************ 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:47.943 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:47.943 [2024-07-15 12:31:20.477572] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
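accel_decomp_mthread goes back to a single core but adds -T 2, which shows up as the bare 2 in the configuration echo further down and, as the test name suggests, asks accel_perf for two worker threads on the one reactor core. The harness call, copied from the accel.sh@121 trace above, has this shape; the surrounding log suggests run_test supplies the START/END banners and the real/user/sys timing, while accel_test forwards its arguments to accel_perf:

    # Harness invocation for this case, as seen in the xtrace; run_test and
    # accel_test are the helpers used for every test in this log.
    run_test accel_decomp_mthread \
        accel_test -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2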
00:06:47.943 [2024-07-15 12:31:20.477694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62038 ] 00:06:47.943 [2024-07-15 12:31:20.617253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.202 [2024-07-15 12:31:20.734981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.202 12:31:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.576 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.576 12:31:21 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:49.576 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.576 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.576 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.577 00:06:49.577 real 0m1.525s 00:06:49.577 user 0m1.308s 00:06:49.577 sys 0m0.116s 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.577 ************************************ 00:06:49.577 END TEST accel_decomp_mthread 00:06:49.577 ************************************ 00:06:49.577 12:31:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:49.577 12:31:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.577 12:31:22 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.577 12:31:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:49.577 12:31:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.577 12:31:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.577 ************************************ 00:06:49.577 START 
TEST accel_decomp_full_mthread 00:06:49.577 ************************************ 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:49.577 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:49.577 [2024-07-15 12:31:22.055551] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
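This final variant combines the two twists exercised separately above, the full 111250-byte payload from -o 0 and the two worker threads from -T 2, again on a single core. Once it completes, the per-case timings are easy to pull back out of this log for a side-by-side look; a purely illustrative way to do that, assuming the output was saved as build.log:

    # List each decompress test banner together with its reported wall-clock time.
    grep -E 'TEST accel_decomp|real[[:space:]]+[0-9]+m' build.log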
00:06:49.577 [2024-07-15 12:31:22.055651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:06:49.577 [2024-07-15 12:31:22.194085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.841 [2024-07-15 12:31:22.311062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.841 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.842 12:31:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 12:31:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.223 00:06:51.223 real 0m1.556s 00:06:51.223 user 0m1.340s 00:06:51.223 sys 0m0.114s 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.223 12:31:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:51.223 ************************************ 00:06:51.223 END TEST accel_decomp_full_mthread 00:06:51.223 ************************************ 
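For reference, the accel_perf invocation this test exercises (visible in the trace above) can also be run by hand; a minimal sketch, assuming the same /home/vagrant/spdk_repo checkout and reusing the exact arguments from the trace. The harness additionally passed "-c /dev/fd/62" with the JSON config produced by build_accel_config, which was empty in this run, so it is omitted here.

  # Decompress the bib test file for 1 second on 2 threads, mirroring the
  # "-t 1 -w decompress ... -o 0 -T 2" arguments traced above; in this run the
  # software module handled the work (accel_module=software in the log).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2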
00:06:51.223 12:31:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.223 12:31:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:51.223 12:31:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.223 12:31:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:51.223 12:31:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:51.223 12:31:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.223 12:31:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.223 12:31:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.223 12:31:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.223 12:31:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.223 12:31:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.223 12:31:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.223 12:31:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:51.223 12:31:23 accel -- accel/accel.sh@41 -- # jq -r . 00:06:51.223 ************************************ 00:06:51.223 START TEST accel_dif_functional_tests 00:06:51.223 ************************************ 00:06:51.223 12:31:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.223 [2024-07-15 12:31:23.696524] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:51.223 [2024-07-15 12:31:23.696633] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62108 ] 00:06:51.223 [2024-07-15 12:31:23.832150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.482 [2024-07-15 12:31:23.954868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.482 [2024-07-15 12:31:23.955008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.482 [2024-07-15 12:31:23.955013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.482 [2024-07-15 12:31:24.010030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.482 00:06:51.482 00:06:51.482 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.482 http://cunit.sourceforge.net/ 00:06:51.482 00:06:51.482 00:06:51.482 Suite: accel_dif 00:06:51.482 Test: verify: DIF generated, GUARD check ...passed 00:06:51.482 Test: verify: DIF generated, APPTAG check ...passed 00:06:51.482 Test: verify: DIF generated, REFTAG check ...passed 00:06:51.482 Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:31:24.048525] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.482 passed 00:06:51.482 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:31:24.048681] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.482 passed 00:06:51.482 Test: verify: DIF not generated, REFTAG check ...passed 00:06:51.482 Test: verify: APPTAG correct, APPTAG check ...[2024-07-15 12:31:24.048724] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.482 passed 00:06:51.482 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 12:31:24.049002] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:51.482 passed 00:06:51.482 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:51.482 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:51.482 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:51.482 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:31:24.049324] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:51.482 passed 00:06:51.482 Test: verify copy: DIF generated, GUARD check ...passed 00:06:51.482 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:51.482 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:51.482 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:31:24.049719] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.482 passed 00:06:51.482 Test: verify copy: DIF not generated, APPTAG check ...passed 00:06:51.482 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:31:24.049802] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.482 [2024-07-15 12:31:24.049861] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.482 passed 00:06:51.482 Test: generate copy: DIF generated, GUARD check ...passed 00:06:51.482 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:51.482 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:51.482 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:51.482 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:51.482 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:51.482 Test: generate copy: iovecs-len validate ...[2024-07-15 12:31:24.050295] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:51.482 passed 00:06:51.482 Test: generate copy: buffer alignment validate ...passed 00:06:51.482 00:06:51.482 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.482 suites 1 1 n/a 0 0 00:06:51.482 tests 26 26 26 0 0 00:06:51.482 asserts 115 115 115 0 n/a 00:06:51.482 00:06:51.482 Elapsed time = 0.003 seconds 00:06:51.742 ************************************ 00:06:51.742 END TEST accel_dif_functional_tests 00:06:51.742 ************************************ 00:06:51.742 00:06:51.742 real 0m0.630s 00:06:51.742 user 0m0.829s 00:06:51.742 sys 0m0.160s 00:06:51.742 12:31:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.742 12:31:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 12:31:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.742 ************************************ 00:06:51.742 END TEST accel 00:06:51.742 ************************************ 00:06:51.742 00:06:51.742 real 0m34.957s 00:06:51.742 user 0m36.572s 00:06:51.742 sys 0m4.008s 00:06:51.742 12:31:24 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.742 12:31:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 12:31:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.742 12:31:24 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:51.742 12:31:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.742 12:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.742 12:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.742 ************************************ 00:06:51.742 START TEST accel_rpc 00:06:51.742 ************************************ 00:06:51.742 12:31:24 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:52.001 * Looking for test storage... 00:06:52.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:52.001 12:31:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.001 12:31:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62178 00:06:52.001 12:31:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62178 00:06:52.001 12:31:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:52.001 12:31:24 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62178 ']' 00:06:52.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.001 12:31:24 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.001 12:31:24 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.001 12:31:24 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.001 12:31:24 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.001 12:31:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.001 [2024-07-15 12:31:24.502092] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:52.001 [2024-07-15 12:31:24.502659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62178 ] 00:06:52.001 [2024-07-15 12:31:24.641564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.260 [2024-07-15 12:31:24.765318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.828 12:31:25 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.828 12:31:25 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.828 12:31:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:52.828 12:31:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:52.828 12:31:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:52.828 12:31:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:52.828 12:31:25 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:52.828 12:31:25 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.828 12:31:25 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.828 12:31:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 ************************************ 00:06:52.828 START TEST accel_assign_opcode 00:06:52.828 ************************************ 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 [2024-07-15 12:31:25.458169] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 [2024-07-15 12:31:25.466155] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.828 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:53.087 [2024-07-15 12:31:25.528093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.087 
12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.087 software 00:06:53.087 00:06:53.087 real 0m0.300s 00:06:53.087 user 0m0.052s 00:06:53.087 sys 0m0.013s 00:06:53.087 ************************************ 00:06:53.087 END TEST accel_assign_opcode 00:06:53.087 ************************************ 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.087 12:31:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:53.345 12:31:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62178 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62178 ']' 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62178 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62178 00:06:53.345 killing process with pid 62178 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62178' 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 62178 00:06:53.345 12:31:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 62178 00:06:53.604 00:06:53.604 real 0m1.862s 00:06:53.604 user 0m1.925s 00:06:53.604 sys 0m0.447s 00:06:53.604 12:31:26 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.604 ************************************ 00:06:53.604 END TEST accel_rpc 00:06:53.604 12:31:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.604 ************************************ 00:06:53.604 12:31:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.604 12:31:26 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.604 12:31:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.604 12:31:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.604 12:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:53.604 ************************************ 00:06:53.604 START TEST app_cmdline 00:06:53.604 ************************************ 00:06:53.604 12:31:26 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.863 * Looking for test storage... 
00:06:53.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.863 12:31:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.863 12:31:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62272 00:06:53.863 12:31:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.863 12:31:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62272 00:06:53.863 12:31:26 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62272 ']' 00:06:53.863 12:31:26 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.863 12:31:26 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.863 12:31:26 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.863 12:31:26 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.863 12:31:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.863 [2024-07-15 12:31:26.410076] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:53.863 [2024-07-15 12:31:26.410172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62272 ] 00:06:54.122 [2024-07-15 12:31:26.549483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.122 [2024-07-15 12:31:26.671934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.122 [2024-07-15 12:31:26.727606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:55.066 { 00:06:55.066 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:06:55.066 "fields": { 00:06:55.066 "major": 24, 00:06:55.066 "minor": 9, 00:06:55.066 "patch": 0, 00:06:55.066 "suffix": "-pre", 00:06:55.066 "commit": "2728651ee" 00:06:55.066 } 00:06:55.066 } 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:55.066 12:31:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:55.066 12:31:27 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.324 request: 00:06:55.324 { 00:06:55.324 "method": "env_dpdk_get_mem_stats", 00:06:55.324 "req_id": 1 00:06:55.324 } 00:06:55.324 Got JSON-RPC error response 00:06:55.324 response: 00:06:55.324 { 00:06:55.324 "code": -32601, 00:06:55.324 "message": "Method not found" 00:06:55.324 } 00:06:55.324 12:31:27 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:55.324 12:31:27 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.324 12:31:27 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.324 12:31:27 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.324 12:31:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62272 00:06:55.325 12:31:27 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62272 ']' 00:06:55.325 12:31:27 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62272 00:06:55.325 12:31:27 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:55.325 12:31:27 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.325 12:31:27 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62272 00:06:55.582 killing process with pid 62272 00:06:55.582 12:31:28 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.582 12:31:28 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.582 12:31:28 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62272' 00:06:55.582 12:31:28 app_cmdline -- common/autotest_common.sh@967 -- # kill 62272 00:06:55.582 12:31:28 app_cmdline -- common/autotest_common.sh@972 -- # wait 62272 00:06:55.840 00:06:55.840 real 0m2.154s 00:06:55.840 user 0m2.680s 00:06:55.840 sys 0m0.496s 00:06:55.840 ************************************ 00:06:55.840 12:31:28 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.840 12:31:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.840 END TEST app_cmdline 00:06:55.840 
************************************ 00:06:55.840 12:31:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:55.840 12:31:28 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.840 12:31:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.840 12:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.840 12:31:28 -- common/autotest_common.sh@10 -- # set +x 00:06:55.840 ************************************ 00:06:55.840 START TEST version 00:06:55.840 ************************************ 00:06:55.840 12:31:28 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:56.098 * Looking for test storage... 00:06:56.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:56.098 12:31:28 version -- app/version.sh@17 -- # get_header_version major 00:06:56.098 12:31:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.098 12:31:28 version -- app/version.sh@14 -- # cut -f2 00:06:56.098 12:31:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.098 12:31:28 version -- app/version.sh@17 -- # major=24 00:06:56.098 12:31:28 version -- app/version.sh@18 -- # get_header_version minor 00:06:56.098 12:31:28 version -- app/version.sh@14 -- # cut -f2 00:06:56.098 12:31:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.098 12:31:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.099 12:31:28 version -- app/version.sh@18 -- # minor=9 00:06:56.099 12:31:28 version -- app/version.sh@19 -- # get_header_version patch 00:06:56.099 12:31:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.099 12:31:28 version -- app/version.sh@14 -- # cut -f2 00:06:56.099 12:31:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.099 12:31:28 version -- app/version.sh@19 -- # patch=0 00:06:56.099 12:31:28 version -- app/version.sh@20 -- # get_header_version suffix 00:06:56.099 12:31:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.099 12:31:28 version -- app/version.sh@14 -- # cut -f2 00:06:56.099 12:31:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.099 12:31:28 version -- app/version.sh@20 -- # suffix=-pre 00:06:56.099 12:31:28 version -- app/version.sh@22 -- # version=24.9 00:06:56.099 12:31:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:56.099 12:31:28 version -- app/version.sh@28 -- # version=24.9rc0 00:06:56.099 12:31:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:56.099 12:31:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:56.099 12:31:28 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:56.099 12:31:28 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:56.099 00:06:56.099 real 0m0.159s 00:06:56.099 user 0m0.090s 00:06:56.099 sys 0m0.098s 00:06:56.099 12:31:28 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.099 12:31:28 version -- common/autotest_common.sh@10 -- # set +x 
00:06:56.099 ************************************ 00:06:56.099 END TEST version 00:06:56.099 ************************************ 00:06:56.099 12:31:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.099 12:31:28 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:56.099 12:31:28 -- spdk/autotest.sh@198 -- # uname -s 00:06:56.099 12:31:28 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:56.099 12:31:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:56.099 12:31:28 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:56.099 12:31:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:56.099 12:31:28 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:56.099 12:31:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.099 12:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.099 12:31:28 -- common/autotest_common.sh@10 -- # set +x 00:06:56.099 ************************************ 00:06:56.099 START TEST spdk_dd 00:06:56.099 ************************************ 00:06:56.099 12:31:28 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:56.099 * Looking for test storage... 00:06:56.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:56.099 12:31:28 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.099 12:31:28 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.099 12:31:28 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.099 12:31:28 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.099 12:31:28 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.099 12:31:28 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.099 12:31:28 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.099 12:31:28 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:56.099 12:31:28 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.099 12:31:28 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:56.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:56.668 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:56.668 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:56.668 12:31:29 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:56.668 12:31:29 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:56.668 12:31:29 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:56.668 12:31:29 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:56.668 12:31:29 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:56.668 12:31:29 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:56.668 12:31:29 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:56.668 12:31:29 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:56.669 12:31:29 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:56.669 12:31:29 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:56.669 12:31:29 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:56.669 12:31:29 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:56.669 
12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:56.669 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:56.670 * spdk_dd linked to liburing 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:56.670 12:31:29 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:56.670 12:31:29 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:56.671 12:31:29 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:56.671 12:31:29 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:56.671 12:31:29 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:56.671 12:31:29 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:56.671 12:31:29 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.671 12:31:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:56.671 ************************************ 00:06:56.671 START TEST spdk_dd_basic_rw 00:06:56.671 ************************************ 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:56.671 * Looking for test storage... 
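The trace above is dd/common.sh working out whether spdk_dd was built against liburing: it walks the binary's shared-object list, glob-matches each soname against liburing.so.*, then confirms /usr/lib64/liburing.so.2 exists before exporting liburing_in_use=1. A minimal bash sketch of that probe (the function name, the ldd source and the binary path are illustrative assumptions, not the exact dd/common.sh code):

# Sketch only: flag whether a binary links liburing, mirroring the
# "read -r lib _ so _" / "== liburing.so.*" loop seen in the trace.
probe_liburing() {
    local binary=$1
    local lib so in_use=0
    while read -r lib _ so _; do
        # Each ldd line starts with the soname, e.g. "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
        if [[ $lib == liburing.so.* ]]; then
            in_use=1
        fi
    done < <(ldd "$binary")   # assumed source of the soname list
    echo "$in_use"
}
liburing_in_use=$(probe_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)

In this run the loop hits liburing.so.2 and the system library is present, so the harness prints "* spdk_dd linked to liburing" and carries liburing_in_use=1 into the rest of dd.sh.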
00:06:56.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.671 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.931 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:56.931 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:56.931 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:56.931 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:56.932 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:56.932 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.933 ************************************ 00:06:56.933 START TEST dd_bs_lt_native_bs 00:06:56.933 ************************************ 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
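The two large [[ ... =~ ... ]] expressions above are get_native_nvme_bs pulling the drive's native block size out of the spdk_nvme_identify dump: the first regex captures the index of the currently selected LBA format (#04 here), the second captures that format's data size (4096 bytes). A simplified sketch of the same two-step match, assuming the identify output is held in one string (the real helper uses mapfile, and the function below is illustrative):

# Sketch only: derive the native block size from an identify dump.
get_native_bs() {
    local id=$1 lbaf
    local re_current='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_current ]] || return 1
    lbaf=${BASH_REMATCH[1]}                  # "04" for this namespace
    local re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] || return 1
    echo "${BASH_REMATCH[1]}"                # 4096
}
native_bs=$(get_native_bs "$(spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')")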
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.933 12:31:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:56.933 { 00:06:56.933 "subsystems": [ 00:06:56.933 { 00:06:56.933 "subsystem": "bdev", 00:06:56.933 "config": [ 00:06:56.933 { 00:06:56.933 "params": { 00:06:56.933 "trtype": "pcie", 00:06:56.933 "traddr": "0000:00:10.0", 00:06:56.933 "name": "Nvme0" 00:06:56.933 }, 00:06:56.933 "method": "bdev_nvme_attach_controller" 00:06:56.933 }, 00:06:56.933 { 00:06:56.933 "method": "bdev_wait_for_examine" 00:06:56.933 } 00:06:56.933 ] 00:06:56.933 } 00:06:56.933 ] 00:06:56.933 } 00:06:56.933 [2024-07-15 12:31:29.605841] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
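The JSON fed to spdk_dd on /dev/fd/61 above is the minimal bdev config the harness generates: attach Nvme0 at 0000:00:10.0 over pcie, then bdev_wait_for_examine. The dd_bs_lt_native_bs case that has just started is a negative test: with the native block size known to be 4096, a copy requested with --bs=2048 must be rejected, and the NOT wrapper turns that expected failure into a pass. A hedged sketch of the check (the file and config names are placeholders; this is not the literal basic_rw.sh code):

# Sketch only: spdk_dd must refuse a --bs below the native block size.
bs=2048
native_bs=4096
if spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --json bdev.conf; then
    echo "FAIL: spdk_dd accepted bs=$bs < native_bs=$native_bs" >&2
    exit 1
fi
echo "OK: bs=$bs < native_bs=$native_bs rejected as expected"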
00:06:56.933 [2024-07-15 12:31:29.605941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62592 ] 00:06:57.192 [2024-07-15 12:31:29.741439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.192 [2024-07-15 12:31:29.872101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.451 [2024-07-15 12:31:29.931576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.451 [2024-07-15 12:31:30.042681] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:57.451 [2024-07-15 12:31:30.042780] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.710 [2024-07-15 12:31:30.169309] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.710 00:06:57.710 real 0m0.714s 00:06:57.710 user 0m0.503s 00:06:57.710 sys 0m0.167s 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:57.710 ************************************ 00:06:57.710 END TEST dd_bs_lt_native_bs 00:06:57.710 ************************************ 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.710 ************************************ 00:06:57.710 START TEST dd_rw 00:06:57.710 ************************************ 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:57.710 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:57.711 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.647 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:58.647 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:58.647 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.647 12:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.647 [2024-07-15 12:31:31.033518] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:58.647 [2024-07-15 12:31:31.033636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62623 ] 00:06:58.647 { 00:06:58.647 "subsystems": [ 00:06:58.647 { 00:06:58.647 "subsystem": "bdev", 00:06:58.647 "config": [ 00:06:58.647 { 00:06:58.647 "params": { 00:06:58.647 "trtype": "pcie", 00:06:58.647 "traddr": "0000:00:10.0", 00:06:58.647 "name": "Nvme0" 00:06:58.647 }, 00:06:58.647 "method": "bdev_nvme_attach_controller" 00:06:58.647 }, 00:06:58.647 { 00:06:58.647 "method": "bdev_wait_for_examine" 00:06:58.647 } 00:06:58.647 ] 00:06:58.647 } 00:06:58.647 ] 00:06:58.647 } 00:06:58.647 [2024-07-15 12:31:31.168965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.647 [2024-07-15 12:31:31.301968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.906 [2024-07-15 12:31:31.362619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.166  Copying: 60/60 [kB] (average 29 MBps) 00:06:59.166 00:06:59.166 12:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:59.166 12:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:59.166 12:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.166 12:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.166 [2024-07-15 12:31:31.758445] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
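By this point basic_rw.sh has built its parameter sweep: block sizes are the native block size shifted left 0, 1 and 2 bits (4096, 8192 and 16384 bytes), each exercised at queue depths 1 and 64, and for bs=4096 the run uses count=15, i.e. a 61440-byte transfer. A sketch of that sweep follows; note the "64 KiB minus one block" count rule is an inference from the counts visible later in the log (15, then 7), not something taken from basic_rw.sh:

# Sketch only: enumerate the (bs, qd) combinations driven by dd_rw.
native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
    bss+=($((native_bs << s)))      # 4096 8192 16384
done
for bs in "${bss[@]}"; do
    count=$((65536 / bs - 1))       # 15, 7, 3 -- inferred pattern
    size=$((count * bs))            # 61440, 57344, 49152
    for qd in "${qds[@]}"; do
        echo "spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd   # $size bytes"
    done
done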
00:06:59.166 [2024-07-15 12:31:31.758553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62642 ] 00:06:59.166 { 00:06:59.166 "subsystems": [ 00:06:59.166 { 00:06:59.166 "subsystem": "bdev", 00:06:59.166 "config": [ 00:06:59.166 { 00:06:59.166 "params": { 00:06:59.166 "trtype": "pcie", 00:06:59.166 "traddr": "0000:00:10.0", 00:06:59.166 "name": "Nvme0" 00:06:59.166 }, 00:06:59.166 "method": "bdev_nvme_attach_controller" 00:06:59.166 }, 00:06:59.166 { 00:06:59.166 "method": "bdev_wait_for_examine" 00:06:59.166 } 00:06:59.166 ] 00:06:59.166 } 00:06:59.166 ] 00:06:59.166 } 00:06:59.425 [2024-07-15 12:31:31.899432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.425 [2024-07-15 12:31:32.003334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.425 [2024-07-15 12:31:32.058277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.941  Copying: 60/60 [kB] (average 14 MBps) 00:06:59.941 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.941 12:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.941 [2024-07-15 12:31:32.460300] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:59.941 [2024-07-15 12:31:32.460393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62663 ] 00:06:59.941 { 00:06:59.941 "subsystems": [ 00:06:59.941 { 00:06:59.941 "subsystem": "bdev", 00:06:59.941 "config": [ 00:06:59.941 { 00:06:59.941 "params": { 00:06:59.941 "trtype": "pcie", 00:06:59.941 "traddr": "0000:00:10.0", 00:06:59.941 "name": "Nvme0" 00:06:59.941 }, 00:06:59.941 "method": "bdev_nvme_attach_controller" 00:06:59.941 }, 00:06:59.941 { 00:06:59.941 "method": "bdev_wait_for_examine" 00:06:59.941 } 00:06:59.941 ] 00:06:59.941 } 00:06:59.941 ] 00:06:59.941 } 00:06:59.941 [2024-07-15 12:31:32.594918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.199 [2024-07-15 12:31:32.718830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.199 [2024-07-15 12:31:32.777899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.457  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:00.457 00:07:00.457 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:00.457 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:00.457 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:00.457 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:00.457 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:00.457 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:00.457 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.393 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:01.393 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:01.393 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.393 12:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.393 [2024-07-15 12:31:33.762576] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
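Each (bs, qd) pass above follows the same verify cycle: write dd.dump0 into the Nvme0n1 bdev, read the same range back into dd.dump1, diff the two files byte for byte, then clear_nvme overwrites the first 1 MiB of the bdev with zeroes before the next combination. A condensed sketch of one pass (the --json config plumbing over /dev/fd is abbreviated to a placeholder file):

# Sketch only: one write/read-back/verify pass as seen in the trace.
run_pass() {
    local bs=$1 qd=$2 count=$3
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json bdev.conf
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json bdev.conf
    diff -q dd.dump0 dd.dump1                                                    # byte-for-byte compare
    spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json bdev.conf  # clear_nvme wipe
}
run_pass 4096 64 15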
00:07:01.393 [2024-07-15 12:31:33.762709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62682 ] 00:07:01.393 { 00:07:01.393 "subsystems": [ 00:07:01.393 { 00:07:01.393 "subsystem": "bdev", 00:07:01.393 "config": [ 00:07:01.393 { 00:07:01.393 "params": { 00:07:01.393 "trtype": "pcie", 00:07:01.393 "traddr": "0000:00:10.0", 00:07:01.393 "name": "Nvme0" 00:07:01.393 }, 00:07:01.393 "method": "bdev_nvme_attach_controller" 00:07:01.393 }, 00:07:01.393 { 00:07:01.393 "method": "bdev_wait_for_examine" 00:07:01.393 } 00:07:01.393 ] 00:07:01.393 } 00:07:01.393 ] 00:07:01.393 } 00:07:01.393 [2024-07-15 12:31:33.903815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.393 [2024-07-15 12:31:34.027348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.653 [2024-07-15 12:31:34.088972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.912  Copying: 60/60 [kB] (average 58 MBps) 00:07:01.912 00:07:01.912 12:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:01.912 12:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:01.912 12:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.912 12:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.912 { 00:07:01.912 "subsystems": [ 00:07:01.912 { 00:07:01.912 "subsystem": "bdev", 00:07:01.912 "config": [ 00:07:01.912 { 00:07:01.912 "params": { 00:07:01.912 "trtype": "pcie", 00:07:01.912 "traddr": "0000:00:10.0", 00:07:01.912 "name": "Nvme0" 00:07:01.912 }, 00:07:01.912 "method": "bdev_nvme_attach_controller" 00:07:01.912 }, 00:07:01.912 { 00:07:01.912 "method": "bdev_wait_for_examine" 00:07:01.912 } 00:07:01.912 ] 00:07:01.912 } 00:07:01.912 ] 00:07:01.912 } 00:07:01.912 [2024-07-15 12:31:34.483756] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:01.912 [2024-07-15 12:31:34.483864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62701 ] 00:07:02.170 [2024-07-15 12:31:34.625998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.170 [2024-07-15 12:31:34.750357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.170 [2024-07-15 12:31:34.808570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.687  Copying: 60/60 [kB] (average 58 MBps) 00:07:02.687 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.687 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.687 { 00:07:02.687 "subsystems": [ 00:07:02.687 { 00:07:02.687 "subsystem": "bdev", 00:07:02.687 "config": [ 00:07:02.687 { 00:07:02.687 "params": { 00:07:02.687 "trtype": "pcie", 00:07:02.687 "traddr": "0000:00:10.0", 00:07:02.687 "name": "Nvme0" 00:07:02.687 }, 00:07:02.687 "method": "bdev_nvme_attach_controller" 00:07:02.687 }, 00:07:02.687 { 00:07:02.687 "method": "bdev_wait_for_examine" 00:07:02.687 } 00:07:02.687 ] 00:07:02.687 } 00:07:02.687 ] 00:07:02.687 } 00:07:02.687 [2024-07-15 12:31:35.233564] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:02.687 [2024-07-15 12:31:35.233798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62711 ] 00:07:02.945 [2024-07-15 12:31:35.374681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.945 [2024-07-15 12:31:35.491978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.945 [2024-07-15 12:31:35.545917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.203  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:03.203 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:03.203 12:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.138 12:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:04.138 12:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:04.138 12:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.138 12:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.138 [2024-07-15 12:31:36.624678] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:04.138 [2024-07-15 12:31:36.624797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62741 ] 00:07:04.138 { 00:07:04.138 "subsystems": [ 00:07:04.138 { 00:07:04.138 "subsystem": "bdev", 00:07:04.138 "config": [ 00:07:04.138 { 00:07:04.138 "params": { 00:07:04.138 "trtype": "pcie", 00:07:04.138 "traddr": "0000:00:10.0", 00:07:04.138 "name": "Nvme0" 00:07:04.138 }, 00:07:04.138 "method": "bdev_nvme_attach_controller" 00:07:04.138 }, 00:07:04.138 { 00:07:04.138 "method": "bdev_wait_for_examine" 00:07:04.138 } 00:07:04.138 ] 00:07:04.138 } 00:07:04.138 ] 00:07:04.138 } 00:07:04.138 [2024-07-15 12:31:36.765993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.398 [2024-07-15 12:31:36.895099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.398 [2024-07-15 12:31:36.952949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.655  Copying: 56/56 [kB] (average 54 MBps) 00:07:04.655 00:07:04.655 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:04.655 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:04.655 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.655 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.913 { 00:07:04.913 "subsystems": [ 00:07:04.913 { 00:07:04.913 "subsystem": "bdev", 00:07:04.913 "config": [ 00:07:04.913 { 00:07:04.913 "params": { 00:07:04.913 "trtype": "pcie", 00:07:04.913 "traddr": "0000:00:10.0", 00:07:04.913 "name": "Nvme0" 00:07:04.913 }, 00:07:04.913 "method": "bdev_nvme_attach_controller" 00:07:04.913 }, 00:07:04.913 { 00:07:04.913 "method": "bdev_wait_for_examine" 00:07:04.913 } 00:07:04.913 ] 00:07:04.913 } 00:07:04.913 ] 00:07:04.913 } 00:07:04.913 [2024-07-15 12:31:37.348470] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:04.913 [2024-07-15 12:31:37.348598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62749 ] 00:07:04.913 [2024-07-15 12:31:37.490524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.171 [2024-07-15 12:31:37.607233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.171 [2024-07-15 12:31:37.661748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.431  Copying: 56/56 [kB] (average 27 MBps) 00:07:05.431 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.431 12:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.431 [2024-07-15 12:31:38.036616] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:05.431 [2024-07-15 12:31:38.036707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62770 ] 00:07:05.431 { 00:07:05.431 "subsystems": [ 00:07:05.431 { 00:07:05.431 "subsystem": "bdev", 00:07:05.431 "config": [ 00:07:05.431 { 00:07:05.431 "params": { 00:07:05.431 "trtype": "pcie", 00:07:05.431 "traddr": "0000:00:10.0", 00:07:05.431 "name": "Nvme0" 00:07:05.431 }, 00:07:05.431 "method": "bdev_nvme_attach_controller" 00:07:05.431 }, 00:07:05.431 { 00:07:05.431 "method": "bdev_wait_for_examine" 00:07:05.431 } 00:07:05.431 ] 00:07:05.431 } 00:07:05.431 ] 00:07:05.431 } 00:07:05.694 [2024-07-15 12:31:38.170919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.694 [2024-07-15 12:31:38.290044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.694 [2024-07-15 12:31:38.344822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.212  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:06.212 00:07:06.212 12:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:06.212 12:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:06.212 12:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:06.212 12:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:06.212 12:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:06.212 12:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:06.212 12:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.780 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:06.780 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:06.780 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.780 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.780 { 00:07:06.780 "subsystems": [ 00:07:06.780 { 00:07:06.780 "subsystem": "bdev", 00:07:06.780 "config": [ 00:07:06.780 { 00:07:06.780 "params": { 00:07:06.780 "trtype": "pcie", 00:07:06.780 "traddr": "0000:00:10.0", 00:07:06.780 "name": "Nvme0" 00:07:06.780 }, 00:07:06.780 "method": "bdev_nvme_attach_controller" 00:07:06.780 }, 00:07:06.780 { 00:07:06.780 "method": "bdev_wait_for_examine" 00:07:06.780 } 00:07:06.780 ] 00:07:06.780 } 00:07:06.780 ] 00:07:06.780 } 00:07:06.780 [2024-07-15 12:31:39.339638] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:06.780 [2024-07-15 12:31:39.339803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62789 ] 00:07:07.039 [2024-07-15 12:31:39.484680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.039 [2024-07-15 12:31:39.602424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.039 [2024-07-15 12:31:39.657449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.305  Copying: 56/56 [kB] (average 54 MBps) 00:07:07.305 00:07:07.564 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:07.565 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:07.565 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.565 12:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.565 [2024-07-15 12:31:40.027227] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:07.565 [2024-07-15 12:31:40.027315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62808 ] 00:07:07.565 { 00:07:07.565 "subsystems": [ 00:07:07.565 { 00:07:07.565 "subsystem": "bdev", 00:07:07.565 "config": [ 00:07:07.565 { 00:07:07.565 "params": { 00:07:07.565 "trtype": "pcie", 00:07:07.565 "traddr": "0000:00:10.0", 00:07:07.565 "name": "Nvme0" 00:07:07.565 }, 00:07:07.565 "method": "bdev_nvme_attach_controller" 00:07:07.565 }, 00:07:07.565 { 00:07:07.565 "method": "bdev_wait_for_examine" 00:07:07.565 } 00:07:07.565 ] 00:07:07.565 } 00:07:07.565 ] 00:07:07.565 } 00:07:07.565 [2024-07-15 12:31:40.159722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.824 [2024-07-15 12:31:40.274140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.824 [2024-07-15 12:31:40.328280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.084  Copying: 56/56 [kB] (average 54 MBps) 00:07:08.084 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.084 12:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.084 [2024-07-15 12:31:40.709133] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:08.084 [2024-07-15 12:31:40.709221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62828 ] 00:07:08.084 { 00:07:08.084 "subsystems": [ 00:07:08.084 { 00:07:08.084 "subsystem": "bdev", 00:07:08.084 "config": [ 00:07:08.084 { 00:07:08.084 "params": { 00:07:08.084 "trtype": "pcie", 00:07:08.084 "traddr": "0000:00:10.0", 00:07:08.084 "name": "Nvme0" 00:07:08.084 }, 00:07:08.084 "method": "bdev_nvme_attach_controller" 00:07:08.084 }, 00:07:08.084 { 00:07:08.084 "method": "bdev_wait_for_examine" 00:07:08.084 } 00:07:08.084 ] 00:07:08.084 } 00:07:08.084 ] 00:07:08.084 } 00:07:08.343 [2024-07-15 12:31:40.844985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.343 [2024-07-15 12:31:40.963270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.343 [2024-07-15 12:31:41.017782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.861  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:08.861 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:08.861 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.430 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:09.430 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:09.430 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.430 12:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.430 [2024-07-15 12:31:41.914799] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
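[Editor's note] The outer structure driving all of these iterations is the small sweep over block sizes and queue depths visible in the xtrace (the `for bs in "${bss[@]}"` / `for qd in "${qds[@]}"` lines from dd/basic_rw.sh). The array contents below are inferred from the invocations in this log (8 KiB and 16 KiB blocks at queue depths 1 and 64); the real arrays are defined in the script itself.

```bash
# Paraphrase of the bs/qd sweep from dd/basic_rw.sh, with values inferred from
# this log: 7 x 8192 = 57344 bytes and 3 x 16384 = 49152 bytes per pass.
bss=(8192 16384)
qds=(1 64)

for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    count=$(( bs == 8192 ? 7 : 3 ))
    size=$(( count * bs ))
    # ...one write/read/diff/clear_nvme cycle per (bs, qd) pair,
    #    as sketched after the first iteration above...
  done
done
```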
00:07:09.430 [2024-07-15 12:31:41.914882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62848 ] 00:07:09.430 { 00:07:09.430 "subsystems": [ 00:07:09.430 { 00:07:09.430 "subsystem": "bdev", 00:07:09.430 "config": [ 00:07:09.430 { 00:07:09.430 "params": { 00:07:09.430 "trtype": "pcie", 00:07:09.430 "traddr": "0000:00:10.0", 00:07:09.430 "name": "Nvme0" 00:07:09.430 }, 00:07:09.430 "method": "bdev_nvme_attach_controller" 00:07:09.430 }, 00:07:09.430 { 00:07:09.430 "method": "bdev_wait_for_examine" 00:07:09.430 } 00:07:09.430 ] 00:07:09.430 } 00:07:09.430 ] 00:07:09.430 } 00:07:09.430 [2024-07-15 12:31:42.047795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.776 [2024-07-15 12:31:42.163877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.776 [2024-07-15 12:31:42.219669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.035  Copying: 48/48 [kB] (average 46 MBps) 00:07:10.035 00:07:10.035 12:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:10.035 12:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:10.035 12:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.035 12:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.035 { 00:07:10.035 "subsystems": [ 00:07:10.035 { 00:07:10.035 "subsystem": "bdev", 00:07:10.035 "config": [ 00:07:10.035 { 00:07:10.035 "params": { 00:07:10.035 "trtype": "pcie", 00:07:10.035 "traddr": "0000:00:10.0", 00:07:10.035 "name": "Nvme0" 00:07:10.035 }, 00:07:10.035 "method": "bdev_nvme_attach_controller" 00:07:10.035 }, 00:07:10.035 { 00:07:10.035 "method": "bdev_wait_for_examine" 00:07:10.035 } 00:07:10.035 ] 00:07:10.035 } 00:07:10.035 ] 00:07:10.035 } 00:07:10.035 [2024-07-15 12:31:42.611642] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:10.035 [2024-07-15 12:31:42.611797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62862 ] 00:07:10.293 [2024-07-15 12:31:42.755823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.293 [2024-07-15 12:31:42.870974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.293 [2024-07-15 12:31:42.926061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.809  Copying: 48/48 [kB] (average 46 MBps) 00:07:10.809 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.809 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.809 [2024-07-15 12:31:43.303070] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:10.809 [2024-07-15 12:31:43.303156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:07:10.809 { 00:07:10.809 "subsystems": [ 00:07:10.809 { 00:07:10.809 "subsystem": "bdev", 00:07:10.809 "config": [ 00:07:10.809 { 00:07:10.809 "params": { 00:07:10.809 "trtype": "pcie", 00:07:10.809 "traddr": "0000:00:10.0", 00:07:10.809 "name": "Nvme0" 00:07:10.809 }, 00:07:10.809 "method": "bdev_nvme_attach_controller" 00:07:10.809 }, 00:07:10.809 { 00:07:10.809 "method": "bdev_wait_for_examine" 00:07:10.809 } 00:07:10.809 ] 00:07:10.809 } 00:07:10.809 ] 00:07:10.809 } 00:07:10.809 [2024-07-15 12:31:43.438138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.066 [2024-07-15 12:31:43.553782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.066 [2024-07-15 12:31:43.609304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.323  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:11.323 00:07:11.323 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:11.323 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:11.323 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:11.323 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:11.323 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:11.323 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:11.323 12:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.888 12:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:11.888 12:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:11.888 12:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.888 12:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.888 { 00:07:11.888 "subsystems": [ 00:07:11.888 { 00:07:11.888 "subsystem": "bdev", 00:07:11.888 "config": [ 00:07:11.888 { 00:07:11.888 "params": { 00:07:11.888 "trtype": "pcie", 00:07:11.888 "traddr": "0000:00:10.0", 00:07:11.888 "name": "Nvme0" 00:07:11.888 }, 00:07:11.888 "method": "bdev_nvme_attach_controller" 00:07:11.888 }, 00:07:11.888 { 00:07:11.888 "method": "bdev_wait_for_examine" 00:07:11.888 } 00:07:11.888 ] 00:07:11.888 } 00:07:11.888 ] 00:07:11.888 } 00:07:11.888 [2024-07-15 12:31:44.515339] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:11.888 [2024-07-15 12:31:44.515496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62896 ] 00:07:12.146 [2024-07-15 12:31:44.658131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.146 [2024-07-15 12:31:44.772561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.404 [2024-07-15 12:31:44.828062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.662  Copying: 48/48 [kB] (average 46 MBps) 00:07:12.662 00:07:12.662 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:12.662 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:12.662 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.662 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.662 { 00:07:12.662 "subsystems": [ 00:07:12.662 { 00:07:12.662 "subsystem": "bdev", 00:07:12.662 "config": [ 00:07:12.662 { 00:07:12.662 "params": { 00:07:12.662 "trtype": "pcie", 00:07:12.662 "traddr": "0000:00:10.0", 00:07:12.662 "name": "Nvme0" 00:07:12.662 }, 00:07:12.662 "method": "bdev_nvme_attach_controller" 00:07:12.662 }, 00:07:12.662 { 00:07:12.662 "method": "bdev_wait_for_examine" 00:07:12.662 } 00:07:12.662 ] 00:07:12.662 } 00:07:12.662 ] 00:07:12.662 } 00:07:12.662 [2024-07-15 12:31:45.221840] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:12.662 [2024-07-15 12:31:45.221969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62915 ] 00:07:12.921 [2024-07-15 12:31:45.365000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.921 [2024-07-15 12:31:45.480227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.921 [2024-07-15 12:31:45.536111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.438  Copying: 48/48 [kB] (average 46 MBps) 00:07:13.438 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.438 12:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.438 [2024-07-15 12:31:45.919191] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:13.438 [2024-07-15 12:31:45.919273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62931 ] 00:07:13.438 { 00:07:13.438 "subsystems": [ 00:07:13.438 { 00:07:13.438 "subsystem": "bdev", 00:07:13.438 "config": [ 00:07:13.438 { 00:07:13.438 "params": { 00:07:13.438 "trtype": "pcie", 00:07:13.438 "traddr": "0000:00:10.0", 00:07:13.438 "name": "Nvme0" 00:07:13.438 }, 00:07:13.438 "method": "bdev_nvme_attach_controller" 00:07:13.438 }, 00:07:13.438 { 00:07:13.438 "method": "bdev_wait_for_examine" 00:07:13.438 } 00:07:13.438 ] 00:07:13.438 } 00:07:13.438 ] 00:07:13.438 } 00:07:13.438 [2024-07-15 12:31:46.053842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.696 [2024-07-15 12:31:46.168129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.696 [2024-07-15 12:31:46.223749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.955  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:13.955 00:07:13.955 00:07:13.955 real 0m16.235s 00:07:13.955 user 0m12.059s 00:07:13.955 sys 0m5.696s 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.955 ************************************ 00:07:13.955 END TEST dd_rw 00:07:13.955 ************************************ 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.955 ************************************ 00:07:13.955 START TEST dd_rw_offset 00:07:13.955 ************************************ 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:13.955 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:14.240 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:14.241 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=vskk8ht1rw9r1r7d8muno6ip08pjo8t3v3f671pjf7gu0fi2c7oknsfq03cyoomoms1fac5rk0su6j8bke61t5xb34wcoho3dh7oqy3plmlz9270bnhkc84400268t5k0j8fl0rkqgll947dj091942f0ri3pw6lwle8w1mqt83ycy4o1leycnjz685kd9r8uyp35hwuofgles7jsqhefrlpqtdom0gteqm28kntf4u8ctrxiog8necmpz7z33gd4dfoli0ybaphusn85513yhae7c80dqjvvugwxyc7ipuq2ra7kcuq251l1cxawhp4c4luzvhevlvkreqtb9z0kqut4fmimrtecg10cqebmx6weupravtv6x5o9amxis784fwl90z0kms08emythiso1inyxcgr5sprej1ieptogeh5ahtnjwnyoal3hcrttge1idcx39g4hg6znwe6v2yvp0yacvm3ut5cm81hjaii55vrglg4vf3mivvxh4upi9xnvcofdp1xofhi8ptkdoqgfk7lrsnwochlq5xjstai54f3v87zt3scis4b4av6n7wteaionhf9qujd5742n3spy698nldndcvp7gq2m2k4rzciirne0hka7f1x8fccsffslyq9msbljsg2trrmhm6kmx5g8qb60z3spvb1mxhkxr9kwh3qqxz7966gxlxo7zwmauq5qlu8xoqb0txzlnfhhfz7esf2hh5hh0gkj384py1plizazyw8tqrqzf071a4de2l04dqitprhzb3mfe99mo5hzfknn3ru3hzct0eiuwbxalv8ujsmw84rzcksdn9t2tbp6glcl9zh7q0i9tlds5v3tnn6s70xl2z2yhfyom2wkwy5xhx25g2ne2kyvke4gi5as1s7eyd14fwpkdadnn04apinszkej9gls1lheikkiree3v99pb2nh6diz7es6bvt22y3sa086vsnsbzcddkfearto35v4uqlttkmwdeyk4y95anfjhxon56fmx96s59znn956b73xqbmiymjgpfli1vi1clcz3du62hprf48z4ffjahs0xe4w163kakx6lj2czjcjhhckys42rwg7ivj7zt3okq0hk2ixcrcrpw9q50u0nxaag7dsbrqx0c76audaforp90ggbaxl6uvsefomglav96z4izm4ydrryk08ry899jfechjxta0hc40fhacz6jsc8ghaeqlqnhcaj1n801zr0yvwrtbq9x15mu15jnuotek7ka9m03giw4xe2bl959y3u6oqoya3rypdc6lloymttwvhytzf7e2pn3fs699cr1zv79m2rahal5cu2x60wtzjfdxj2o6l3upy7x66el1b8v02c40ux712u73yvjfsf5kb5dc2yi8ne59a6k1300vhscu8dh1kw3r3lzfsp7zvyakwm8828ujenqgzifm1i172g09fcv1f6b07a8bwofiloedswwwjwurxviwogma5yu4gpmr07jar1oyu8adzd2r3hdk97xid097fzeznurpz78khsaeggy4x63do0aw60ut5unbi12o19uu79fz1216fe9stgopjywjmtvvz582irdlf8ehnjstqkjpf8a0g4elwrfhuix5xuilhc2uwmdw0ii326ejzk0spwdiv4yeh2j71r5i1wo1yz14ha3j6mse70hv0ilahqyqh6irebbuifwq9bn2c3xttt65sppwsh3pceoi6j8d5a95fbsjjs8i584bp5av45jt0ne3mimr18yhgipwhotj5at2r3o9r2wppk6ed4t23qaqfu1vg8s2xdpoj1e3tuyqrp0q0aw7aaggbrkgnrxuoh3siiexyv9r7aeowitficwgnapse0zzfffibbjdcp0wnrm1o5gjn7btcyi78arqho9labbyp9c9v2jx1cl5o7h4tzrhaq0ncdoxgwqubpd0t4p0ne7ijttg37o1aattdof8edb2ggq2majtkh8kdlv2adgoj0zzn4qhpze3qy0c5inbv0h338clawz4ormkxj4wch7195blsnlng5779xd50pt3pxsdphaoowp0sj2o2927cey0inx318ekeruulp3xk6b6u73scb42i6k1s5moyq55eb5k5k50ckac1hzk6uj9fx5j4d9kwnllm21ub5usjbs61nnvw44xwq61l9w82xcflgu8q5pdjo4vrwj4bh3c7z7h2ygqxvyje0euddrl2wzoontbdh6xvzsbl0fl0y37zstdl1j41dfv7f1zo2ta32k1p288xwwbdy6qxzx8ln09dwqrwg5dd9mvk6otol6bzmbajrpn2m4ijymfo5km7wolg8uh4l6oewsvflskrs2dz88hr8cp0fvigmibwyxdclruegntrjzir9dj8cqaj09x0knb0o02027isyotoog4gev6sf6o13jpctay5op5nzh08k9khxwvto5uojt30ir6yhcqdkt67y4qqbkn1rsqqrptvcu6jfkoqtl79opsoslh0ypklikwcorsw8gdiuwoilmwddpgvypcdahm6qu97p95us94zcdjsqh4p08f1v6k823ri3phfvk1f64o2obxy9hm632ojhige859cvlpjif3o45c31h2d3mxt2hj1ltfpi8q2ws56241nvdz3iwceg846ep49bu81fi8kk0hp8yjtww1a27apu5zbpa8iy6rd7fj28qnwlbbt4ukm4owyd6u61ugygfwajnk3llb8vrivzvw964ei7ojwv78s902o7pek7zlolaax71fht18ckbe546q9xckay2fn0qge0ybhqwjtazty5daujw4nmoz1la0y0x398516g9mepp1hvgm5w1qp77p4c6wmpfm5jum06s0yzq13awuwvktav9c11mpfw3xhmxiedccqyswk1svjpfmxl02mjrd7q41bou7yh3lma6fcbe92070yfvt01cla7emvphdsx855bx280ve4je0y08ptes8ksnndfvftnh1b6acvu6wer0x9moznl405opuu290kic30n3ha82jpip2djc1li4t03wrlphgr1cm0uy8fntote0jjfy7l6ognevo1tatctlhnj1ps5ggfjv4ctqssjzrgd9ucuregx0cpf7r024asqj3ke1trfi7fxl6qjwkku3kc9e0or2n2wibgrjmcri2rg84amz51r15aair8ka9cdt82csg2ypi5vvpvi88l5pwdszxmcpm1nbtqdzt4zu19g0dy0b3l7dkq21cttskvphq0r75meqr71s51dyr67yl5umus9akl7a87lxg48pllrq532m9fx68xz1brzbx6r7gaginfruo7jnb4rn3caukm5mzvgyq6rdrfdlsyi5tsj2izbbhsh34bki41mlkyr2yj3vguukgkcxxwl9dvyd71myz28o01vjvbx6u86uzgvbygxuo7kwxacr2v9y8prpydsx8h41sstvvzzwl4g0aojly0qcbifhwhhn7mpk2zkrrxs2bcivlptyidqpg309mm
h9wv3qzuiq77jzm2tob0uh6yc88l4j25nnlgb5kk7r8xda0dw84o8hlj7d8r0zwrmw15vhjr5rmuhgmxe1zy23wqlbncp1ehggjokpcn1hrhnonmorrbfpk2g7taf3lh6efaqxl9gczntu1sbiscog8xk3kkfwfo54hedu59pkahhd3mixtkl7e6bcc1r2vzsgztavpia3cm5g660zgiy825jh6t8jkj6eyb41h3wtl6swo019johm3nqxcvtc7hxzukvmckjcw5a8qrh51qofq59w7cas04c8m6yaicafqxvcdodcd1h59zkc4ts8sc3a9qb81bqcrdck5ojnb49hdqiau9zz0abgbmbxta0f2mpox7mxeqy98u2kpl3uckkcpvnk28bjdpemw0r1zaulbis1ktbflic5uvwxi4vtjpwusedt2dgqhn1zrtfmegcpwznrqlccgdw480lcdzeudbymwdabawvf8j4wt4jj9jvruygbvipo035r2a5szlcdxmuw7i4t4yyonz7oi1k0lyky6pwge73t 00:07:14.241 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:14.241 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:14.241 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:14.241 12:31:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:14.241 [2024-07-15 12:31:46.709034] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:14.241 [2024-07-15 12:31:46.709136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62967 ] 00:07:14.241 { 00:07:14.241 "subsystems": [ 00:07:14.241 { 00:07:14.241 "subsystem": "bdev", 00:07:14.241 "config": [ 00:07:14.241 { 00:07:14.241 "params": { 00:07:14.241 "trtype": "pcie", 00:07:14.241 "traddr": "0000:00:10.0", 00:07:14.241 "name": "Nvme0" 00:07:14.241 }, 00:07:14.241 "method": "bdev_nvme_attach_controller" 00:07:14.241 }, 00:07:14.241 { 00:07:14.241 "method": "bdev_wait_for_examine" 00:07:14.241 } 00:07:14.241 ] 00:07:14.241 } 00:07:14.241 ] 00:07:14.241 } 00:07:14.241 [2024-07-15 12:31:46.840316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.500 [2024-07-15 12:31:46.947638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.500 [2024-07-15 12:31:47.005706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.760  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:14.760 00:07:14.760 12:31:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:14.760 12:31:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:14.760 12:31:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:14.760 12:31:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:14.760 [2024-07-15 12:31:47.377446] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
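[Editor's note] The dd_rw_offset test running here checks that --seek and --skip address the same block: the 4 KiB random token echoed above is written one block into the bdev and then read back from the same offset. A condensed paraphrase of the two invocations in the trace, reusing the SPDK_DD/DUMP0/DUMP1/conf definitions from the first sketch:

```bash
# Condensed paraphrase of the dd_rw_offset round trip from the trace:
# write dd.dump0 one block into the bdev (--seek=1), then read one block back
# from the same offset (--skip=1) and compare the payloads.
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$conf"
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$conf"

# The test then reads the first 4096 bytes back and compares them against the
# original random token (the read -rn4096 / [[ ... == ... ]] check that
# follows in the xtrace below).
read -rn4096 data_check < "$DUMP1"
```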
00:07:14.760 [2024-07-15 12:31:47.377541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62980 ] 00:07:14.760 { 00:07:14.760 "subsystems": [ 00:07:14.760 { 00:07:14.760 "subsystem": "bdev", 00:07:14.760 "config": [ 00:07:14.760 { 00:07:14.760 "params": { 00:07:14.760 "trtype": "pcie", 00:07:14.760 "traddr": "0000:00:10.0", 00:07:14.760 "name": "Nvme0" 00:07:14.760 }, 00:07:14.760 "method": "bdev_nvme_attach_controller" 00:07:14.760 }, 00:07:14.760 { 00:07:14.760 "method": "bdev_wait_for_examine" 00:07:14.760 } 00:07:14.760 ] 00:07:14.760 } 00:07:14.760 ] 00:07:14.760 } 00:07:15.020 [2024-07-15 12:31:47.506225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.020 [2024-07-15 12:31:47.615036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.020 [2024-07-15 12:31:47.689230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.539  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:15.539 00:07:15.539 12:31:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ vskk8ht1rw9r1r7d8muno6ip08pjo8t3v3f671pjf7gu0fi2c7oknsfq03cyoomoms1fac5rk0su6j8bke61t5xb34wcoho3dh7oqy3plmlz9270bnhkc84400268t5k0j8fl0rkqgll947dj091942f0ri3pw6lwle8w1mqt83ycy4o1leycnjz685kd9r8uyp35hwuofgles7jsqhefrlpqtdom0gteqm28kntf4u8ctrxiog8necmpz7z33gd4dfoli0ybaphusn85513yhae7c80dqjvvugwxyc7ipuq2ra7kcuq251l1cxawhp4c4luzvhevlvkreqtb9z0kqut4fmimrtecg10cqebmx6weupravtv6x5o9amxis784fwl90z0kms08emythiso1inyxcgr5sprej1ieptogeh5ahtnjwnyoal3hcrttge1idcx39g4hg6znwe6v2yvp0yacvm3ut5cm81hjaii55vrglg4vf3mivvxh4upi9xnvcofdp1xofhi8ptkdoqgfk7lrsnwochlq5xjstai54f3v87zt3scis4b4av6n7wteaionhf9qujd5742n3spy698nldndcvp7gq2m2k4rzciirne0hka7f1x8fccsffslyq9msbljsg2trrmhm6kmx5g8qb60z3spvb1mxhkxr9kwh3qqxz7966gxlxo7zwmauq5qlu8xoqb0txzlnfhhfz7esf2hh5hh0gkj384py1plizazyw8tqrqzf071a4de2l04dqitprhzb3mfe99mo5hzfknn3ru3hzct0eiuwbxalv8ujsmw84rzcksdn9t2tbp6glcl9zh7q0i9tlds5v3tnn6s70xl2z2yhfyom2wkwy5xhx25g2ne2kyvke4gi5as1s7eyd14fwpkdadnn04apinszkej9gls1lheikkiree3v99pb2nh6diz7es6bvt22y3sa086vsnsbzcddkfearto35v4uqlttkmwdeyk4y95anfjhxon56fmx96s59znn956b73xqbmiymjgpfli1vi1clcz3du62hprf48z4ffjahs0xe4w163kakx6lj2czjcjhhckys42rwg7ivj7zt3okq0hk2ixcrcrpw9q50u0nxaag7dsbrqx0c76audaforp90ggbaxl6uvsefomglav96z4izm4ydrryk08ry899jfechjxta0hc40fhacz6jsc8ghaeqlqnhcaj1n801zr0yvwrtbq9x15mu15jnuotek7ka9m03giw4xe2bl959y3u6oqoya3rypdc6lloymttwvhytzf7e2pn3fs699cr1zv79m2rahal5cu2x60wtzjfdxj2o6l3upy7x66el1b8v02c40ux712u73yvjfsf5kb5dc2yi8ne59a6k1300vhscu8dh1kw3r3lzfsp7zvyakwm8828ujenqgzifm1i172g09fcv1f6b07a8bwofiloedswwwjwurxviwogma5yu4gpmr07jar1oyu8adzd2r3hdk97xid097fzeznurpz78khsaeggy4x63do0aw60ut5unbi12o19uu79fz1216fe9stgopjywjmtvvz582irdlf8ehnjstqkjpf8a0g4elwrfhuix5xuilhc2uwmdw0ii326ejzk0spwdiv4yeh2j71r5i1wo1yz14ha3j6mse70hv0ilahqyqh6irebbuifwq9bn2c3xttt65sppwsh3pceoi6j8d5a95fbsjjs8i584bp5av45jt0ne3mimr18yhgipwhotj5at2r3o9r2wppk6ed4t23qaqfu1vg8s2xdpoj1e3tuyqrp0q0aw7aaggbrkgnrxuoh3siiexyv9r7aeowitficwgnapse0zzfffibbjdcp0wnrm1o5gjn7btcyi78arqho9labbyp9c9v2jx1cl5o7h4tzrhaq0ncdoxgwqubpd0t4p0ne7ijttg37o1aattdof8edb2ggq2majtkh8kdlv2adgoj0zzn4qhpze3qy0c5inbv0h338clawz4ormkxj4wch7195blsnlng5779xd50pt3pxsdphaoowp0sj2o2927cey0inx318ekeruulp3xk6b6u73scb42i6k1s5moyq55eb5k5k50ckac1hzk
6uj9fx5j4d9kwnllm21ub5usjbs61nnvw44xwq61l9w82xcflgu8q5pdjo4vrwj4bh3c7z7h2ygqxvyje0euddrl2wzoontbdh6xvzsbl0fl0y37zstdl1j41dfv7f1zo2ta32k1p288xwwbdy6qxzx8ln09dwqrwg5dd9mvk6otol6bzmbajrpn2m4ijymfo5km7wolg8uh4l6oewsvflskrs2dz88hr8cp0fvigmibwyxdclruegntrjzir9dj8cqaj09x0knb0o02027isyotoog4gev6sf6o13jpctay5op5nzh08k9khxwvto5uojt30ir6yhcqdkt67y4qqbkn1rsqqrptvcu6jfkoqtl79opsoslh0ypklikwcorsw8gdiuwoilmwddpgvypcdahm6qu97p95us94zcdjsqh4p08f1v6k823ri3phfvk1f64o2obxy9hm632ojhige859cvlpjif3o45c31h2d3mxt2hj1ltfpi8q2ws56241nvdz3iwceg846ep49bu81fi8kk0hp8yjtww1a27apu5zbpa8iy6rd7fj28qnwlbbt4ukm4owyd6u61ugygfwajnk3llb8vrivzvw964ei7ojwv78s902o7pek7zlolaax71fht18ckbe546q9xckay2fn0qge0ybhqwjtazty5daujw4nmoz1la0y0x398516g9mepp1hvgm5w1qp77p4c6wmpfm5jum06s0yzq13awuwvktav9c11mpfw3xhmxiedccqyswk1svjpfmxl02mjrd7q41bou7yh3lma6fcbe92070yfvt01cla7emvphdsx855bx280ve4je0y08ptes8ksnndfvftnh1b6acvu6wer0x9moznl405opuu290kic30n3ha82jpip2djc1li4t03wrlphgr1cm0uy8fntote0jjfy7l6ognevo1tatctlhnj1ps5ggfjv4ctqssjzrgd9ucuregx0cpf7r024asqj3ke1trfi7fxl6qjwkku3kc9e0or2n2wibgrjmcri2rg84amz51r15aair8ka9cdt82csg2ypi5vvpvi88l5pwdszxmcpm1nbtqdzt4zu19g0dy0b3l7dkq21cttskvphq0r75meqr71s51dyr67yl5umus9akl7a87lxg48pllrq532m9fx68xz1brzbx6r7gaginfruo7jnb4rn3caukm5mzvgyq6rdrfdlsyi5tsj2izbbhsh34bki41mlkyr2yj3vguukgkcxxwl9dvyd71myz28o01vjvbx6u86uzgvbygxuo7kwxacr2v9y8prpydsx8h41sstvvzzwl4g0aojly0qcbifhwhhn7mpk2zkrrxs2bcivlptyidqpg309mmh9wv3qzuiq77jzm2tob0uh6yc88l4j25nnlgb5kk7r8xda0dw84o8hlj7d8r0zwrmw15vhjr5rmuhgmxe1zy23wqlbncp1ehggjokpcn1hrhnonmorrbfpk2g7taf3lh6efaqxl9gczntu1sbiscog8xk3kkfwfo54hedu59pkahhd3mixtkl7e6bcc1r2vzsgztavpia3cm5g660zgiy825jh6t8jkj6eyb41h3wtl6swo019johm3nqxcvtc7hxzukvmckjcw5a8qrh51qofq59w7cas04c8m6yaicafqxvcdodcd1h59zkc4ts8sc3a9qb81bqcrdck5ojnb49hdqiau9zz0abgbmbxta0f2mpox7mxeqy98u2kpl3uckkcpvnk28bjdpemw0r1zaulbis1ktbflic5uvwxi4vtjpwusedt2dgqhn1zrtfmegcpwznrqlccgdw480lcdzeudbymwdabawvf8j4wt4jj9jvruygbvipo035r2a5szlcdxmuw7i4t4yyonz7oi1k0lyky6pwge73t == 
\v\s\k\k\8\h\t\1\r\w\9\r\1\r\7\d\8\m\u\n\o\6\i\p\0\8\p\j\o\8\t\3\v\3\f\6\7\1\p\j\f\7\g\u\0\f\i\2\c\7\o\k\n\s\f\q\0\3\c\y\o\o\m\o\m\s\1\f\a\c\5\r\k\0\s\u\6\j\8\b\k\e\6\1\t\5\x\b\3\4\w\c\o\h\o\3\d\h\7\o\q\y\3\p\l\m\l\z\9\2\7\0\b\n\h\k\c\8\4\4\0\0\2\6\8\t\5\k\0\j\8\f\l\0\r\k\q\g\l\l\9\4\7\d\j\0\9\1\9\4\2\f\0\r\i\3\p\w\6\l\w\l\e\8\w\1\m\q\t\8\3\y\c\y\4\o\1\l\e\y\c\n\j\z\6\8\5\k\d\9\r\8\u\y\p\3\5\h\w\u\o\f\g\l\e\s\7\j\s\q\h\e\f\r\l\p\q\t\d\o\m\0\g\t\e\q\m\2\8\k\n\t\f\4\u\8\c\t\r\x\i\o\g\8\n\e\c\m\p\z\7\z\3\3\g\d\4\d\f\o\l\i\0\y\b\a\p\h\u\s\n\8\5\5\1\3\y\h\a\e\7\c\8\0\d\q\j\v\v\u\g\w\x\y\c\7\i\p\u\q\2\r\a\7\k\c\u\q\2\5\1\l\1\c\x\a\w\h\p\4\c\4\l\u\z\v\h\e\v\l\v\k\r\e\q\t\b\9\z\0\k\q\u\t\4\f\m\i\m\r\t\e\c\g\1\0\c\q\e\b\m\x\6\w\e\u\p\r\a\v\t\v\6\x\5\o\9\a\m\x\i\s\7\8\4\f\w\l\9\0\z\0\k\m\s\0\8\e\m\y\t\h\i\s\o\1\i\n\y\x\c\g\r\5\s\p\r\e\j\1\i\e\p\t\o\g\e\h\5\a\h\t\n\j\w\n\y\o\a\l\3\h\c\r\t\t\g\e\1\i\d\c\x\3\9\g\4\h\g\6\z\n\w\e\6\v\2\y\v\p\0\y\a\c\v\m\3\u\t\5\c\m\8\1\h\j\a\i\i\5\5\v\r\g\l\g\4\v\f\3\m\i\v\v\x\h\4\u\p\i\9\x\n\v\c\o\f\d\p\1\x\o\f\h\i\8\p\t\k\d\o\q\g\f\k\7\l\r\s\n\w\o\c\h\l\q\5\x\j\s\t\a\i\5\4\f\3\v\8\7\z\t\3\s\c\i\s\4\b\4\a\v\6\n\7\w\t\e\a\i\o\n\h\f\9\q\u\j\d\5\7\4\2\n\3\s\p\y\6\9\8\n\l\d\n\d\c\v\p\7\g\q\2\m\2\k\4\r\z\c\i\i\r\n\e\0\h\k\a\7\f\1\x\8\f\c\c\s\f\f\s\l\y\q\9\m\s\b\l\j\s\g\2\t\r\r\m\h\m\6\k\m\x\5\g\8\q\b\6\0\z\3\s\p\v\b\1\m\x\h\k\x\r\9\k\w\h\3\q\q\x\z\7\9\6\6\g\x\l\x\o\7\z\w\m\a\u\q\5\q\l\u\8\x\o\q\b\0\t\x\z\l\n\f\h\h\f\z\7\e\s\f\2\h\h\5\h\h\0\g\k\j\3\8\4\p\y\1\p\l\i\z\a\z\y\w\8\t\q\r\q\z\f\0\7\1\a\4\d\e\2\l\0\4\d\q\i\t\p\r\h\z\b\3\m\f\e\9\9\m\o\5\h\z\f\k\n\n\3\r\u\3\h\z\c\t\0\e\i\u\w\b\x\a\l\v\8\u\j\s\m\w\8\4\r\z\c\k\s\d\n\9\t\2\t\b\p\6\g\l\c\l\9\z\h\7\q\0\i\9\t\l\d\s\5\v\3\t\n\n\6\s\7\0\x\l\2\z\2\y\h\f\y\o\m\2\w\k\w\y\5\x\h\x\2\5\g\2\n\e\2\k\y\v\k\e\4\g\i\5\a\s\1\s\7\e\y\d\1\4\f\w\p\k\d\a\d\n\n\0\4\a\p\i\n\s\z\k\e\j\9\g\l\s\1\l\h\e\i\k\k\i\r\e\e\3\v\9\9\p\b\2\n\h\6\d\i\z\7\e\s\6\b\v\t\2\2\y\3\s\a\0\8\6\v\s\n\s\b\z\c\d\d\k\f\e\a\r\t\o\3\5\v\4\u\q\l\t\t\k\m\w\d\e\y\k\4\y\9\5\a\n\f\j\h\x\o\n\5\6\f\m\x\9\6\s\5\9\z\n\n\9\5\6\b\7\3\x\q\b\m\i\y\m\j\g\p\f\l\i\1\v\i\1\c\l\c\z\3\d\u\6\2\h\p\r\f\4\8\z\4\f\f\j\a\h\s\0\x\e\4\w\1\6\3\k\a\k\x\6\l\j\2\c\z\j\c\j\h\h\c\k\y\s\4\2\r\w\g\7\i\v\j\7\z\t\3\o\k\q\0\h\k\2\i\x\c\r\c\r\p\w\9\q\5\0\u\0\n\x\a\a\g\7\d\s\b\r\q\x\0\c\7\6\a\u\d\a\f\o\r\p\9\0\g\g\b\a\x\l\6\u\v\s\e\f\o\m\g\l\a\v\9\6\z\4\i\z\m\4\y\d\r\r\y\k\0\8\r\y\8\9\9\j\f\e\c\h\j\x\t\a\0\h\c\4\0\f\h\a\c\z\6\j\s\c\8\g\h\a\e\q\l\q\n\h\c\a\j\1\n\8\0\1\z\r\0\y\v\w\r\t\b\q\9\x\1\5\m\u\1\5\j\n\u\o\t\e\k\7\k\a\9\m\0\3\g\i\w\4\x\e\2\b\l\9\5\9\y\3\u\6\o\q\o\y\a\3\r\y\p\d\c\6\l\l\o\y\m\t\t\w\v\h\y\t\z\f\7\e\2\p\n\3\f\s\6\9\9\c\r\1\z\v\7\9\m\2\r\a\h\a\l\5\c\u\2\x\6\0\w\t\z\j\f\d\x\j\2\o\6\l\3\u\p\y\7\x\6\6\e\l\1\b\8\v\0\2\c\4\0\u\x\7\1\2\u\7\3\y\v\j\f\s\f\5\k\b\5\d\c\2\y\i\8\n\e\5\9\a\6\k\1\3\0\0\v\h\s\c\u\8\d\h\1\k\w\3\r\3\l\z\f\s\p\7\z\v\y\a\k\w\m\8\8\2\8\u\j\e\n\q\g\z\i\f\m\1\i\1\7\2\g\0\9\f\c\v\1\f\6\b\0\7\a\8\b\w\o\f\i\l\o\e\d\s\w\w\w\j\w\u\r\x\v\i\w\o\g\m\a\5\y\u\4\g\p\m\r\0\7\j\a\r\1\o\y\u\8\a\d\z\d\2\r\3\h\d\k\9\7\x\i\d\0\9\7\f\z\e\z\n\u\r\p\z\7\8\k\h\s\a\e\g\g\y\4\x\6\3\d\o\0\a\w\6\0\u\t\5\u\n\b\i\1\2\o\1\9\u\u\7\9\f\z\1\2\1\6\f\e\9\s\t\g\o\p\j\y\w\j\m\t\v\v\z\5\8\2\i\r\d\l\f\8\e\h\n\j\s\t\q\k\j\p\f\8\a\0\g\4\e\l\w\r\f\h\u\i\x\5\x\u\i\l\h\c\2\u\w\m\d\w\0\i\i\3\2\6\e\j\z\k\0\s\p\w\d\i\v\4\y\e\h\2\j\7\1\r\5\i\1\w\o\1\y\z\1\4\h\a\3\j\6\m\s\e\7\0\h\v\0\i\l\a\h\q\y\q\h\6\i\r\e\b\b\u\i\f\w\q\9\b\n\2\c\3\x\t\t\t\6\5\s\p\p\w\s\h\3\p\c\e\o\i\6\j\8\d\5\a\9\5\f\b\s\j\j\s\8\i\5\8\4\b\p\5\a\v\4\5\j\t\0\n\e\3\
m\i\m\r\1\8\y\h\g\i\p\w\h\o\t\j\5\a\t\2\r\3\o\9\r\2\w\p\p\k\6\e\d\4\t\2\3\q\a\q\f\u\1\v\g\8\s\2\x\d\p\o\j\1\e\3\t\u\y\q\r\p\0\q\0\a\w\7\a\a\g\g\b\r\k\g\n\r\x\u\o\h\3\s\i\i\e\x\y\v\9\r\7\a\e\o\w\i\t\f\i\c\w\g\n\a\p\s\e\0\z\z\f\f\f\i\b\b\j\d\c\p\0\w\n\r\m\1\o\5\g\j\n\7\b\t\c\y\i\7\8\a\r\q\h\o\9\l\a\b\b\y\p\9\c\9\v\2\j\x\1\c\l\5\o\7\h\4\t\z\r\h\a\q\0\n\c\d\o\x\g\w\q\u\b\p\d\0\t\4\p\0\n\e\7\i\j\t\t\g\3\7\o\1\a\a\t\t\d\o\f\8\e\d\b\2\g\g\q\2\m\a\j\t\k\h\8\k\d\l\v\2\a\d\g\o\j\0\z\z\n\4\q\h\p\z\e\3\q\y\0\c\5\i\n\b\v\0\h\3\3\8\c\l\a\w\z\4\o\r\m\k\x\j\4\w\c\h\7\1\9\5\b\l\s\n\l\n\g\5\7\7\9\x\d\5\0\p\t\3\p\x\s\d\p\h\a\o\o\w\p\0\s\j\2\o\2\9\2\7\c\e\y\0\i\n\x\3\1\8\e\k\e\r\u\u\l\p\3\x\k\6\b\6\u\7\3\s\c\b\4\2\i\6\k\1\s\5\m\o\y\q\5\5\e\b\5\k\5\k\5\0\c\k\a\c\1\h\z\k\6\u\j\9\f\x\5\j\4\d\9\k\w\n\l\l\m\2\1\u\b\5\u\s\j\b\s\6\1\n\n\v\w\4\4\x\w\q\6\1\l\9\w\8\2\x\c\f\l\g\u\8\q\5\p\d\j\o\4\v\r\w\j\4\b\h\3\c\7\z\7\h\2\y\g\q\x\v\y\j\e\0\e\u\d\d\r\l\2\w\z\o\o\n\t\b\d\h\6\x\v\z\s\b\l\0\f\l\0\y\3\7\z\s\t\d\l\1\j\4\1\d\f\v\7\f\1\z\o\2\t\a\3\2\k\1\p\2\8\8\x\w\w\b\d\y\6\q\x\z\x\8\l\n\0\9\d\w\q\r\w\g\5\d\d\9\m\v\k\6\o\t\o\l\6\b\z\m\b\a\j\r\p\n\2\m\4\i\j\y\m\f\o\5\k\m\7\w\o\l\g\8\u\h\4\l\6\o\e\w\s\v\f\l\s\k\r\s\2\d\z\8\8\h\r\8\c\p\0\f\v\i\g\m\i\b\w\y\x\d\c\l\r\u\e\g\n\t\r\j\z\i\r\9\d\j\8\c\q\a\j\0\9\x\0\k\n\b\0\o\0\2\0\2\7\i\s\y\o\t\o\o\g\4\g\e\v\6\s\f\6\o\1\3\j\p\c\t\a\y\5\o\p\5\n\z\h\0\8\k\9\k\h\x\w\v\t\o\5\u\o\j\t\3\0\i\r\6\y\h\c\q\d\k\t\6\7\y\4\q\q\b\k\n\1\r\s\q\q\r\p\t\v\c\u\6\j\f\k\o\q\t\l\7\9\o\p\s\o\s\l\h\0\y\p\k\l\i\k\w\c\o\r\s\w\8\g\d\i\u\w\o\i\l\m\w\d\d\p\g\v\y\p\c\d\a\h\m\6\q\u\9\7\p\9\5\u\s\9\4\z\c\d\j\s\q\h\4\p\0\8\f\1\v\6\k\8\2\3\r\i\3\p\h\f\v\k\1\f\6\4\o\2\o\b\x\y\9\h\m\6\3\2\o\j\h\i\g\e\8\5\9\c\v\l\p\j\i\f\3\o\4\5\c\3\1\h\2\d\3\m\x\t\2\h\j\1\l\t\f\p\i\8\q\2\w\s\5\6\2\4\1\n\v\d\z\3\i\w\c\e\g\8\4\6\e\p\4\9\b\u\8\1\f\i\8\k\k\0\h\p\8\y\j\t\w\w\1\a\2\7\a\p\u\5\z\b\p\a\8\i\y\6\r\d\7\f\j\2\8\q\n\w\l\b\b\t\4\u\k\m\4\o\w\y\d\6\u\6\1\u\g\y\g\f\w\a\j\n\k\3\l\l\b\8\v\r\i\v\z\v\w\9\6\4\e\i\7\o\j\w\v\7\8\s\9\0\2\o\7\p\e\k\7\z\l\o\l\a\a\x\7\1\f\h\t\1\8\c\k\b\e\5\4\6\q\9\x\c\k\a\y\2\f\n\0\q\g\e\0\y\b\h\q\w\j\t\a\z\t\y\5\d\a\u\j\w\4\n\m\o\z\1\l\a\0\y\0\x\3\9\8\5\1\6\g\9\m\e\p\p\1\h\v\g\m\5\w\1\q\p\7\7\p\4\c\6\w\m\p\f\m\5\j\u\m\0\6\s\0\y\z\q\1\3\a\w\u\w\v\k\t\a\v\9\c\1\1\m\p\f\w\3\x\h\m\x\i\e\d\c\c\q\y\s\w\k\1\s\v\j\p\f\m\x\l\0\2\m\j\r\d\7\q\4\1\b\o\u\7\y\h\3\l\m\a\6\f\c\b\e\9\2\0\7\0\y\f\v\t\0\1\c\l\a\7\e\m\v\p\h\d\s\x\8\5\5\b\x\2\8\0\v\e\4\j\e\0\y\0\8\p\t\e\s\8\k\s\n\n\d\f\v\f\t\n\h\1\b\6\a\c\v\u\6\w\e\r\0\x\9\m\o\z\n\l\4\0\5\o\p\u\u\2\9\0\k\i\c\3\0\n\3\h\a\8\2\j\p\i\p\2\d\j\c\1\l\i\4\t\0\3\w\r\l\p\h\g\r\1\c\m\0\u\y\8\f\n\t\o\t\e\0\j\j\f\y\7\l\6\o\g\n\e\v\o\1\t\a\t\c\t\l\h\n\j\1\p\s\5\g\g\f\j\v\4\c\t\q\s\s\j\z\r\g\d\9\u\c\u\r\e\g\x\0\c\p\f\7\r\0\2\4\a\s\q\j\3\k\e\1\t\r\f\i\7\f\x\l\6\q\j\w\k\k\u\3\k\c\9\e\0\o\r\2\n\2\w\i\b\g\r\j\m\c\r\i\2\r\g\8\4\a\m\z\5\1\r\1\5\a\a\i\r\8\k\a\9\c\d\t\8\2\c\s\g\2\y\p\i\5\v\v\p\v\i\8\8\l\5\p\w\d\s\z\x\m\c\p\m\1\n\b\t\q\d\z\t\4\z\u\1\9\g\0\d\y\0\b\3\l\7\d\k\q\2\1\c\t\t\s\k\v\p\h\q\0\r\7\5\m\e\q\r\7\1\s\5\1\d\y\r\6\7\y\l\5\u\m\u\s\9\a\k\l\7\a\8\7\l\x\g\4\8\p\l\l\r\q\5\3\2\m\9\f\x\6\8\x\z\1\b\r\z\b\x\6\r\7\g\a\g\i\n\f\r\u\o\7\j\n\b\4\r\n\3\c\a\u\k\m\5\m\z\v\g\y\q\6\r\d\r\f\d\l\s\y\i\5\t\s\j\2\i\z\b\b\h\s\h\3\4\b\k\i\4\1\m\l\k\y\r\2\y\j\3\v\g\u\u\k\g\k\c\x\x\w\l\9\d\v\y\d\7\1\m\y\z\2\8\o\0\1\v\j\v\b\x\6\u\8\6\u\z\g\v\b\y\g\x\u\o\7\k\w\x\a\c\r\2\v\9\y\8\p\r\p\y\d\s\x\8\h\4\1\s\s\t\v\v\z\z\w\l\4\g\0\a\o\j\l\y\0\q\c\b\i\f\h\w\h\h\n\7\m\p\k\2\z\k\r\r\x\s\2\b\c\i\v\l\p\t\y\i\d\q\p\g\3\0\9\m\m\h\9\w\v\3
\q\z\u\i\q\7\7\j\z\m\2\t\o\b\0\u\h\6\y\c\8\8\l\4\j\2\5\n\n\l\g\b\5\k\k\7\r\8\x\d\a\0\d\w\8\4\o\8\h\l\j\7\d\8\r\0\z\w\r\m\w\1\5\v\h\j\r\5\r\m\u\h\g\m\x\e\1\z\y\2\3\w\q\l\b\n\c\p\1\e\h\g\g\j\o\k\p\c\n\1\h\r\h\n\o\n\m\o\r\r\b\f\p\k\2\g\7\t\a\f\3\l\h\6\e\f\a\q\x\l\9\g\c\z\n\t\u\1\s\b\i\s\c\o\g\8\x\k\3\k\k\f\w\f\o\5\4\h\e\d\u\5\9\p\k\a\h\h\d\3\m\i\x\t\k\l\7\e\6\b\c\c\1\r\2\v\z\s\g\z\t\a\v\p\i\a\3\c\m\5\g\6\6\0\z\g\i\y\8\2\5\j\h\6\t\8\j\k\j\6\e\y\b\4\1\h\3\w\t\l\6\s\w\o\0\1\9\j\o\h\m\3\n\q\x\c\v\t\c\7\h\x\z\u\k\v\m\c\k\j\c\w\5\a\8\q\r\h\5\1\q\o\f\q\5\9\w\7\c\a\s\0\4\c\8\m\6\y\a\i\c\a\f\q\x\v\c\d\o\d\c\d\1\h\5\9\z\k\c\4\t\s\8\s\c\3\a\9\q\b\8\1\b\q\c\r\d\c\k\5\o\j\n\b\4\9\h\d\q\i\a\u\9\z\z\0\a\b\g\b\m\b\x\t\a\0\f\2\m\p\o\x\7\m\x\e\q\y\9\8\u\2\k\p\l\3\u\c\k\k\c\p\v\n\k\2\8\b\j\d\p\e\m\w\0\r\1\z\a\u\l\b\i\s\1\k\t\b\f\l\i\c\5\u\v\w\x\i\4\v\t\j\p\w\u\s\e\d\t\2\d\g\q\h\n\1\z\r\t\f\m\e\g\c\p\w\z\n\r\q\l\c\c\g\d\w\4\8\0\l\c\d\z\e\u\d\b\y\m\w\d\a\b\a\w\v\f\8\j\4\w\t\4\j\j\9\j\v\r\u\y\g\b\v\i\p\o\0\3\5\r\2\a\5\s\z\l\c\d\x\m\u\w\7\i\4\t\4\y\y\o\n\z\7\o\i\1\k\0\l\y\k\y\6\p\w\g\e\7\3\t ]] 00:07:15.540 00:07:15.540 real 0m1.421s 00:07:15.540 user 0m0.969s 00:07:15.540 sys 0m0.641s 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:15.540 ************************************ 00:07:15.540 END TEST dd_rw_offset 00:07:15.540 ************************************ 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.540 12:31:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.540 [2024-07-15 12:31:48.125289] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:15.540 [2024-07-15 12:31:48.125397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63015 ] 00:07:15.540 { 00:07:15.540 "subsystems": [ 00:07:15.540 { 00:07:15.540 "subsystem": "bdev", 00:07:15.540 "config": [ 00:07:15.540 { 00:07:15.540 "params": { 00:07:15.540 "trtype": "pcie", 00:07:15.540 "traddr": "0000:00:10.0", 00:07:15.540 "name": "Nvme0" 00:07:15.540 }, 00:07:15.540 "method": "bdev_nvme_attach_controller" 00:07:15.540 }, 00:07:15.540 { 00:07:15.540 "method": "bdev_wait_for_examine" 00:07:15.540 } 00:07:15.540 ] 00:07:15.540 } 00:07:15.540 ] 00:07:15.540 } 00:07:15.799 [2024-07-15 12:31:48.257926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.799 [2024-07-15 12:31:48.354338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.799 [2024-07-15 12:31:48.412676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.058  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:16.058 00:07:16.058 12:31:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.058 00:07:16.058 real 0m19.469s 00:07:16.058 user 0m14.124s 00:07:16.058 sys 0m6.978s 00:07:16.058 12:31:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.058 12:31:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.058 ************************************ 00:07:16.058 END TEST spdk_dd_basic_rw 00:07:16.058 ************************************ 00:07:16.318 12:31:48 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:16.318 12:31:48 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:16.318 12:31:48 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.318 12:31:48 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.318 12:31:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:16.318 ************************************ 00:07:16.318 START TEST spdk_dd_posix 00:07:16.318 ************************************ 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:16.318 * Looking for test storage... 
00:07:16.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:16.318 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:16.319 * First test run, liburing in use 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:16.319 ************************************ 00:07:16.319 START TEST dd_flag_append 00:07:16.319 ************************************ 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=8zgcef7apwzl2y8tde5jlaqlkiqfhdg1 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=065tlqmd4qrdgby9fvm07kdp63vfcczu 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 8zgcef7apwzl2y8tde5jlaqlkiqfhdg1 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 065tlqmd4qrdgby9fvm07kdp63vfcczu 00:07:16.319 12:31:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:16.319 [2024-07-15 12:31:48.917627] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
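[Editor's note] The dd_flag_append test above writes one 32-character token to each dump file and then copies dump0 onto dump1 with --oflag=append; the [[ ... ]] check that follows asserts that dump1 ends up holding its original token followed by dump0's. A paraphrase is below; the shell redirections are not echoed by xtrace, so the `> file` steps are assumptions about how the tokens reach the dump files.

```bash
# Paraphrase of the dd_flag_append check. The two 32-character tokens are the
# ones generated in the trace; the redirections writing them to the dump files
# are assumed, since xtrace does not show redirections.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

dump0=8zgcef7apwzl2y8tde5jlaqlkiqfhdg1
dump1=065tlqmd4qrdgby9fvm07kdp63vfcczu

printf '%s' "$dump0" > "$DUMP0"
printf '%s' "$dump1" > "$DUMP1"

# Appending dd.dump0 onto dd.dump1 must preserve dump1's existing contents.
"$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --oflag=append

[[ $(<"$DUMP1") == "${dump1}${dump0}" ]] && echo "append preserved existing contents"
```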
00:07:16.319 [2024-07-15 12:31:48.917732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63074 ] 00:07:16.578 [2024-07-15 12:31:49.051071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.578 [2024-07-15 12:31:49.135675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.578 [2024-07-15 12:31:49.188761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.870  Copying: 32/32 [B] (average 31 kBps) 00:07:16.870 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 065tlqmd4qrdgby9fvm07kdp63vfcczu8zgcef7apwzl2y8tde5jlaqlkiqfhdg1 == \0\6\5\t\l\q\m\d\4\q\r\d\g\b\y\9\f\v\m\0\7\k\d\p\6\3\v\f\c\c\z\u\8\z\g\c\e\f\7\a\p\w\z\l\2\y\8\t\d\e\5\j\l\a\q\l\k\i\q\f\h\d\g\1 ]] 00:07:16.870 00:07:16.870 real 0m0.554s 00:07:16.870 user 0m0.306s 00:07:16.870 sys 0m0.264s 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:16.870 ************************************ 00:07:16.870 END TEST dd_flag_append 00:07:16.870 ************************************ 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:16.870 ************************************ 00:07:16.870 START TEST dd_flag_directory 00:07:16.870 ************************************ 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.870 12:31:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.132 [2024-07-15 12:31:49.525392] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:17.132 [2024-07-15 12:31:49.525492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63102 ] 00:07:17.132 [2024-07-15 12:31:49.663487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.132 [2024-07-15 12:31:49.771592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.391 [2024-07-15 12:31:49.825518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.391 [2024-07-15 12:31:49.855901] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.391 [2024-07-15 12:31:49.855980] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.391 [2024-07-15 12:31:49.856009] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.391 [2024-07-15 12:31:49.964674] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:07:17.391 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.392 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.392 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.392 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.392 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.392 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.392 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.651 [2024-07-15 12:31:50.109172] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:17.651 [2024-07-15 12:31:50.109279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63112 ] 00:07:17.651 [2024-07-15 12:31:50.244837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.651 [2024-07-15 12:31:50.321482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.910 [2024-07-15 12:31:50.374105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.910 [2024-07-15 12:31:50.405842] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.910 [2024-07-15 12:31:50.405936] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.910 [2024-07-15 12:31:50.405966] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.910 [2024-07-15 12:31:50.515634] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.170 00:07:18.170 real 0m1.156s 00:07:18.170 user 0m0.651s 00:07:18.170 sys 0m0.296s 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.170 ************************************ 00:07:18.170 END TEST dd_flag_directory 00:07:18.170 ************************************ 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:18.170 ************************************ 00:07:18.170 START TEST dd_flag_nofollow 00:07:18.170 ************************************ 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.170 12:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.170 
[2024-07-15 12:31:50.738830] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:18.170 [2024-07-15 12:31:50.738940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63140 ] 00:07:18.429 [2024-07-15 12:31:50.877039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.429 [2024-07-15 12:31:50.990427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.429 [2024-07-15 12:31:51.043399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.429 [2024-07-15 12:31:51.075243] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:18.429 [2024-07-15 12:31:51.075296] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:18.429 [2024-07-15 12:31:51.075312] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.688 [2024-07-15 12:31:51.185605] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.688 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:18.688 [2024-07-15 12:31:51.337239] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:18.688 [2024-07-15 12:31:51.337356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:07:18.947 [2024-07-15 12:31:51.474775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.947 [2024-07-15 12:31:51.570780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.947 [2024-07-15 12:31:51.626537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.206 [2024-07-15 12:31:51.662141] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:19.206 [2024-07-15 12:31:51.662197] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:19.206 [2024-07-15 12:31:51.662213] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.206 [2024-07-15 12:31:51.776388] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:19.206 12:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.465 [2024-07-15 12:31:51.933679] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
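The nofollow sequence traced above mirrors the directory check before it: both wrap spdk_dd in NOT and expect a specific open() failure. Here dd.dump0.link and dd.dump1.link are symlinks, the copies run with --iflag=nofollow / --oflag=nofollow must fail with "Too many levels of symbolic links" (ELOOP, the effect of O_NOFOLLOW on a symlink), and the plain copy through the link started at the end of this chunk is the positive case that should succeed. A rough equivalent with coreutils dd, assuming the same file names (illustration only, not the harness code):

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  ! dd if=dd.dump0.link iflag=nofollow of=dd.dump1      # refusing to follow the input link -> ELOOP
  ! dd if=dd.dump0 of=dd.dump1.link oflag=nofollow      # same on the output side
  dd if=dd.dump0.link of=dd.dump1                       # without nofollow the link is dereferenced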
00:07:19.465 [2024-07-15 12:31:51.933810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63163 ] 00:07:19.465 [2024-07-15 12:31:52.071702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.724 [2024-07-15 12:31:52.163939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.724 [2024-07-15 12:31:52.215907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.983  Copying: 512/512 [B] (average 500 kBps) 00:07:19.983 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ky1bt0bma8i75zw4hd55parm0wsgwhgnwznm4njodoe1mcfl3t8ffvawfzx2e3ghie1d7f8vz6li3qz32ieg9yyskf9bqpnk1gkd2rldek4aumi2fwkkwx91vu72vi0jav8sxs3c9x433vtdmacn8hi0sso19t63i4ch8084bw0ik1mh1r5ie7qdzjjjyvgvyj0fwf24nlltj7hd2ifnh82a4getg1labhc2y142nawh4qltide26i5pb7j6kzqpkt3buo1nj957wk7pu49mo48diyx7haka16hqi9pxa8b8jrl3wvsvcq9erz6wcts9pjovjlltlkvv43zzayr3ayr3gfnpe415n9cq73xtf81jo8812z8a0dzq435tkhiu4unduiaml4s0hu9pg2wvf9jh69ro7ekgn3obqpk5aj6044pjwzrss2pitdauwzvfe27sti9w4v56jx142tvsz5jn1oqf22p2kcc3i6m79fablxm8xdklb2kjsgq89tit == \k\y\1\b\t\0\b\m\a\8\i\7\5\z\w\4\h\d\5\5\p\a\r\m\0\w\s\g\w\h\g\n\w\z\n\m\4\n\j\o\d\o\e\1\m\c\f\l\3\t\8\f\f\v\a\w\f\z\x\2\e\3\g\h\i\e\1\d\7\f\8\v\z\6\l\i\3\q\z\3\2\i\e\g\9\y\y\s\k\f\9\b\q\p\n\k\1\g\k\d\2\r\l\d\e\k\4\a\u\m\i\2\f\w\k\k\w\x\9\1\v\u\7\2\v\i\0\j\a\v\8\s\x\s\3\c\9\x\4\3\3\v\t\d\m\a\c\n\8\h\i\0\s\s\o\1\9\t\6\3\i\4\c\h\8\0\8\4\b\w\0\i\k\1\m\h\1\r\5\i\e\7\q\d\z\j\j\j\y\v\g\v\y\j\0\f\w\f\2\4\n\l\l\t\j\7\h\d\2\i\f\n\h\8\2\a\4\g\e\t\g\1\l\a\b\h\c\2\y\1\4\2\n\a\w\h\4\q\l\t\i\d\e\2\6\i\5\p\b\7\j\6\k\z\q\p\k\t\3\b\u\o\1\n\j\9\5\7\w\k\7\p\u\4\9\m\o\4\8\d\i\y\x\7\h\a\k\a\1\6\h\q\i\9\p\x\a\8\b\8\j\r\l\3\w\v\s\v\c\q\9\e\r\z\6\w\c\t\s\9\p\j\o\v\j\l\l\t\l\k\v\v\4\3\z\z\a\y\r\3\a\y\r\3\g\f\n\p\e\4\1\5\n\9\c\q\7\3\x\t\f\8\1\j\o\8\8\1\2\z\8\a\0\d\z\q\4\3\5\t\k\h\i\u\4\u\n\d\u\i\a\m\l\4\s\0\h\u\9\p\g\2\w\v\f\9\j\h\6\9\r\o\7\e\k\g\n\3\o\b\q\p\k\5\a\j\6\0\4\4\p\j\w\z\r\s\s\2\p\i\t\d\a\u\w\z\v\f\e\2\7\s\t\i\9\w\4\v\5\6\j\x\1\4\2\t\v\s\z\5\j\n\1\o\q\f\2\2\p\2\k\c\c\3\i\6\m\7\9\f\a\b\l\x\m\8\x\d\k\l\b\2\k\j\s\g\q\8\9\t\i\t ]] 00:07:19.983 00:07:19.983 real 0m1.810s 00:07:19.983 user 0m1.049s 00:07:19.983 sys 0m0.585s 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:19.983 ************************************ 00:07:19.983 END TEST dd_flag_nofollow 00:07:19.983 ************************************ 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:19.983 ************************************ 00:07:19.983 START TEST dd_flag_noatime 00:07:19.983 ************************************ 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:07:19.983 12:31:52 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721046712 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721046712 00:07:19.983 12:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:20.921 12:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.180 [2024-07-15 12:31:53.623925] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:21.180 [2024-07-15 12:31:53.624028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63205 ] 00:07:21.180 [2024-07-15 12:31:53.765775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.438 [2024-07-15 12:31:53.909429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.438 [2024-07-15 12:31:53.978987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.697  Copying: 512/512 [B] (average 500 kBps) 00:07:21.697 00:07:21.697 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.697 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721046712 )) 00:07:21.697 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.697 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721046712 )) 00:07:21.697 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.697 [2024-07-15 12:31:54.312915] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
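The noatime pass captures each file's access time with stat --printf=%X (both read back as 1721046712 above), sleeps one second, and then copies dump0 with --iflag=noatime; the posix.sh@69/@70 checks that follow assert the recorded atimes are unchanged, and the plain copy launched at the end of this chunk is the control that is allowed to advance the source atime (the later @73 comparison). A compact sketch of the idea, assuming the filesystem is not mounted noatime and using coreutils dd as a stand-in for the traced spdk_dd call:

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  dd if=dd.dump0 iflag=noatime of=dd.dump1              # O_NOATIME read: access time must not move
  (( $(stat --printf=%X dd.dump0) == atime_before ))
  dd if=dd.dump0 of=dd.dump1                            # ordinary read: access time may advance
  (( $(stat --printf=%X dd.dump0) >= atime_before ))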
00:07:21.697 [2024-07-15 12:31:54.313053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63219 ] 00:07:21.993 [2024-07-15 12:31:54.454679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.993 [2024-07-15 12:31:54.576253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.993 [2024-07-15 12:31:54.635071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.256  Copying: 512/512 [B] (average 500 kBps) 00:07:22.256 00:07:22.256 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.256 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721046714 )) 00:07:22.256 00:07:22.256 real 0m2.360s 00:07:22.256 user 0m0.789s 00:07:22.256 sys 0m0.625s 00:07:22.256 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.256 12:31:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:22.256 ************************************ 00:07:22.256 END TEST dd_flag_noatime 00:07:22.256 ************************************ 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:22.552 ************************************ 00:07:22.552 START TEST dd_flags_misc 00:07:22.552 ************************************ 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.552 12:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:22.552 [2024-07-15 12:31:55.022400] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
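dd_flags_misc, which starts in this chunk, pushes the same 512-byte dump0-to-dump1 copy through every input/output flag combination: flags_ro=(direct nonblock) on the read side and flags_rw=(direct nonblock sync dsync) on the write side, eight runs in total, each followed by the posix.sh@93 check that dump1 still matches the generated pattern. The visible loop structure is roughly as follows (a sketch reconstructed from the trace, not the script verbatim; spdk_dd stands for the full build/bin path shown above):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      # posix.sh@93 then compares dd.dump1 against the expected dump0 contents
    done
  done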
00:07:22.552 [2024-07-15 12:31:55.022500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63253 ] 00:07:22.552 [2024-07-15 12:31:55.163180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.826 [2024-07-15 12:31:55.295016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.826 [2024-07-15 12:31:55.353636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.084  Copying: 512/512 [B] (average 500 kBps) 00:07:23.084 00:07:23.084 12:31:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8skx1dxkr35khezfw7bvbgfzq9s06yk0p6ee5z04cdags0kb6k4qdk6p7jzsqvbpx3auy4p2jwzwsymiaut5sza7qcz6b4l9ndbqd5180n9qie0zw8tvu7g8b50xvmr53fsls8sok30070s2rhakaq06bolof3um46yiwa2j02tllxhauoxi6gvxtlakeut0jvqfrsjxj3wdpodghkk4gcvomxfufxll2rwj2v6s799rdzmp0tc53qj03rqwl5uzzlmgyqs1at8aoc7ueroylebvckb7ulw6nqmg36235h8s4qdilpzeejaz596g29ttqc05kg1nku56y10doy5ko72gqojaplym4nl9512huccjkda5ae06wirnto9iqmstj9mgwmzjbishe2fyecy0dbdpr0rz9sjjhqfq33uxlbkiksb6n3g8ck2gvncmg7e0d5zajkugqqtp8dhx62i41y0dzr5f3zg1ishvaaplu6q1h59en6pqvb0botkh7gf == \d\8\s\k\x\1\d\x\k\r\3\5\k\h\e\z\f\w\7\b\v\b\g\f\z\q\9\s\0\6\y\k\0\p\6\e\e\5\z\0\4\c\d\a\g\s\0\k\b\6\k\4\q\d\k\6\p\7\j\z\s\q\v\b\p\x\3\a\u\y\4\p\2\j\w\z\w\s\y\m\i\a\u\t\5\s\z\a\7\q\c\z\6\b\4\l\9\n\d\b\q\d\5\1\8\0\n\9\q\i\e\0\z\w\8\t\v\u\7\g\8\b\5\0\x\v\m\r\5\3\f\s\l\s\8\s\o\k\3\0\0\7\0\s\2\r\h\a\k\a\q\0\6\b\o\l\o\f\3\u\m\4\6\y\i\w\a\2\j\0\2\t\l\l\x\h\a\u\o\x\i\6\g\v\x\t\l\a\k\e\u\t\0\j\v\q\f\r\s\j\x\j\3\w\d\p\o\d\g\h\k\k\4\g\c\v\o\m\x\f\u\f\x\l\l\2\r\w\j\2\v\6\s\7\9\9\r\d\z\m\p\0\t\c\5\3\q\j\0\3\r\q\w\l\5\u\z\z\l\m\g\y\q\s\1\a\t\8\a\o\c\7\u\e\r\o\y\l\e\b\v\c\k\b\7\u\l\w\6\n\q\m\g\3\6\2\3\5\h\8\s\4\q\d\i\l\p\z\e\e\j\a\z\5\9\6\g\2\9\t\t\q\c\0\5\k\g\1\n\k\u\5\6\y\1\0\d\o\y\5\k\o\7\2\g\q\o\j\a\p\l\y\m\4\n\l\9\5\1\2\h\u\c\c\j\k\d\a\5\a\e\0\6\w\i\r\n\t\o\9\i\q\m\s\t\j\9\m\g\w\m\z\j\b\i\s\h\e\2\f\y\e\c\y\0\d\b\d\p\r\0\r\z\9\s\j\j\h\q\f\q\3\3\u\x\l\b\k\i\k\s\b\6\n\3\g\8\c\k\2\g\v\n\c\m\g\7\e\0\d\5\z\a\j\k\u\g\q\q\t\p\8\d\h\x\6\2\i\4\1\y\0\d\z\r\5\f\3\z\g\1\i\s\h\v\a\a\p\l\u\6\q\1\h\5\9\e\n\6\p\q\v\b\0\b\o\t\k\h\7\g\f ]] 00:07:23.084 12:31:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.084 12:31:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:23.084 [2024-07-15 12:31:55.659013] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:23.084 [2024-07-15 12:31:55.659122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63262 ] 00:07:23.344 [2024-07-15 12:31:55.796850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.344 [2024-07-15 12:31:55.914420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.344 [2024-07-15 12:31:55.973089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.603  Copying: 512/512 [B] (average 500 kBps) 00:07:23.603 00:07:23.603 12:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8skx1dxkr35khezfw7bvbgfzq9s06yk0p6ee5z04cdags0kb6k4qdk6p7jzsqvbpx3auy4p2jwzwsymiaut5sza7qcz6b4l9ndbqd5180n9qie0zw8tvu7g8b50xvmr53fsls8sok30070s2rhakaq06bolof3um46yiwa2j02tllxhauoxi6gvxtlakeut0jvqfrsjxj3wdpodghkk4gcvomxfufxll2rwj2v6s799rdzmp0tc53qj03rqwl5uzzlmgyqs1at8aoc7ueroylebvckb7ulw6nqmg36235h8s4qdilpzeejaz596g29ttqc05kg1nku56y10doy5ko72gqojaplym4nl9512huccjkda5ae06wirnto9iqmstj9mgwmzjbishe2fyecy0dbdpr0rz9sjjhqfq33uxlbkiksb6n3g8ck2gvncmg7e0d5zajkugqqtp8dhx62i41y0dzr5f3zg1ishvaaplu6q1h59en6pqvb0botkh7gf == \d\8\s\k\x\1\d\x\k\r\3\5\k\h\e\z\f\w\7\b\v\b\g\f\z\q\9\s\0\6\y\k\0\p\6\e\e\5\z\0\4\c\d\a\g\s\0\k\b\6\k\4\q\d\k\6\p\7\j\z\s\q\v\b\p\x\3\a\u\y\4\p\2\j\w\z\w\s\y\m\i\a\u\t\5\s\z\a\7\q\c\z\6\b\4\l\9\n\d\b\q\d\5\1\8\0\n\9\q\i\e\0\z\w\8\t\v\u\7\g\8\b\5\0\x\v\m\r\5\3\f\s\l\s\8\s\o\k\3\0\0\7\0\s\2\r\h\a\k\a\q\0\6\b\o\l\o\f\3\u\m\4\6\y\i\w\a\2\j\0\2\t\l\l\x\h\a\u\o\x\i\6\g\v\x\t\l\a\k\e\u\t\0\j\v\q\f\r\s\j\x\j\3\w\d\p\o\d\g\h\k\k\4\g\c\v\o\m\x\f\u\f\x\l\l\2\r\w\j\2\v\6\s\7\9\9\r\d\z\m\p\0\t\c\5\3\q\j\0\3\r\q\w\l\5\u\z\z\l\m\g\y\q\s\1\a\t\8\a\o\c\7\u\e\r\o\y\l\e\b\v\c\k\b\7\u\l\w\6\n\q\m\g\3\6\2\3\5\h\8\s\4\q\d\i\l\p\z\e\e\j\a\z\5\9\6\g\2\9\t\t\q\c\0\5\k\g\1\n\k\u\5\6\y\1\0\d\o\y\5\k\o\7\2\g\q\o\j\a\p\l\y\m\4\n\l\9\5\1\2\h\u\c\c\j\k\d\a\5\a\e\0\6\w\i\r\n\t\o\9\i\q\m\s\t\j\9\m\g\w\m\z\j\b\i\s\h\e\2\f\y\e\c\y\0\d\b\d\p\r\0\r\z\9\s\j\j\h\q\f\q\3\3\u\x\l\b\k\i\k\s\b\6\n\3\g\8\c\k\2\g\v\n\c\m\g\7\e\0\d\5\z\a\j\k\u\g\q\q\t\p\8\d\h\x\6\2\i\4\1\y\0\d\z\r\5\f\3\z\g\1\i\s\h\v\a\a\p\l\u\6\q\1\h\5\9\e\n\6\p\q\v\b\0\b\o\t\k\h\7\g\f ]] 00:07:23.603 12:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.603 12:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:23.603 [2024-07-15 12:31:56.281488] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:23.603 [2024-07-15 12:31:56.281585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63272 ] 00:07:23.862 [2024-07-15 12:31:56.418332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.862 [2024-07-15 12:31:56.517165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.120 [2024-07-15 12:31:56.576871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.379  Copying: 512/512 [B] (average 125 kBps) 00:07:24.379 00:07:24.379 12:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8skx1dxkr35khezfw7bvbgfzq9s06yk0p6ee5z04cdags0kb6k4qdk6p7jzsqvbpx3auy4p2jwzwsymiaut5sza7qcz6b4l9ndbqd5180n9qie0zw8tvu7g8b50xvmr53fsls8sok30070s2rhakaq06bolof3um46yiwa2j02tllxhauoxi6gvxtlakeut0jvqfrsjxj3wdpodghkk4gcvomxfufxll2rwj2v6s799rdzmp0tc53qj03rqwl5uzzlmgyqs1at8aoc7ueroylebvckb7ulw6nqmg36235h8s4qdilpzeejaz596g29ttqc05kg1nku56y10doy5ko72gqojaplym4nl9512huccjkda5ae06wirnto9iqmstj9mgwmzjbishe2fyecy0dbdpr0rz9sjjhqfq33uxlbkiksb6n3g8ck2gvncmg7e0d5zajkugqqtp8dhx62i41y0dzr5f3zg1ishvaaplu6q1h59en6pqvb0botkh7gf == \d\8\s\k\x\1\d\x\k\r\3\5\k\h\e\z\f\w\7\b\v\b\g\f\z\q\9\s\0\6\y\k\0\p\6\e\e\5\z\0\4\c\d\a\g\s\0\k\b\6\k\4\q\d\k\6\p\7\j\z\s\q\v\b\p\x\3\a\u\y\4\p\2\j\w\z\w\s\y\m\i\a\u\t\5\s\z\a\7\q\c\z\6\b\4\l\9\n\d\b\q\d\5\1\8\0\n\9\q\i\e\0\z\w\8\t\v\u\7\g\8\b\5\0\x\v\m\r\5\3\f\s\l\s\8\s\o\k\3\0\0\7\0\s\2\r\h\a\k\a\q\0\6\b\o\l\o\f\3\u\m\4\6\y\i\w\a\2\j\0\2\t\l\l\x\h\a\u\o\x\i\6\g\v\x\t\l\a\k\e\u\t\0\j\v\q\f\r\s\j\x\j\3\w\d\p\o\d\g\h\k\k\4\g\c\v\o\m\x\f\u\f\x\l\l\2\r\w\j\2\v\6\s\7\9\9\r\d\z\m\p\0\t\c\5\3\q\j\0\3\r\q\w\l\5\u\z\z\l\m\g\y\q\s\1\a\t\8\a\o\c\7\u\e\r\o\y\l\e\b\v\c\k\b\7\u\l\w\6\n\q\m\g\3\6\2\3\5\h\8\s\4\q\d\i\l\p\z\e\e\j\a\z\5\9\6\g\2\9\t\t\q\c\0\5\k\g\1\n\k\u\5\6\y\1\0\d\o\y\5\k\o\7\2\g\q\o\j\a\p\l\y\m\4\n\l\9\5\1\2\h\u\c\c\j\k\d\a\5\a\e\0\6\w\i\r\n\t\o\9\i\q\m\s\t\j\9\m\g\w\m\z\j\b\i\s\h\e\2\f\y\e\c\y\0\d\b\d\p\r\0\r\z\9\s\j\j\h\q\f\q\3\3\u\x\l\b\k\i\k\s\b\6\n\3\g\8\c\k\2\g\v\n\c\m\g\7\e\0\d\5\z\a\j\k\u\g\q\q\t\p\8\d\h\x\6\2\i\4\1\y\0\d\z\r\5\f\3\z\g\1\i\s\h\v\a\a\p\l\u\6\q\1\h\5\9\e\n\6\p\q\v\b\0\b\o\t\k\h\7\g\f ]] 00:07:24.379 12:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:24.379 12:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:24.379 [2024-07-15 12:31:56.867383] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:24.379 [2024-07-15 12:31:56.867464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63281 ] 00:07:24.379 [2024-07-15 12:31:57.000784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.639 [2024-07-15 12:31:57.113757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.639 [2024-07-15 12:31:57.171895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.898  Copying: 512/512 [B] (average 250 kBps) 00:07:24.898 00:07:24.898 12:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8skx1dxkr35khezfw7bvbgfzq9s06yk0p6ee5z04cdags0kb6k4qdk6p7jzsqvbpx3auy4p2jwzwsymiaut5sza7qcz6b4l9ndbqd5180n9qie0zw8tvu7g8b50xvmr53fsls8sok30070s2rhakaq06bolof3um46yiwa2j02tllxhauoxi6gvxtlakeut0jvqfrsjxj3wdpodghkk4gcvomxfufxll2rwj2v6s799rdzmp0tc53qj03rqwl5uzzlmgyqs1at8aoc7ueroylebvckb7ulw6nqmg36235h8s4qdilpzeejaz596g29ttqc05kg1nku56y10doy5ko72gqojaplym4nl9512huccjkda5ae06wirnto9iqmstj9mgwmzjbishe2fyecy0dbdpr0rz9sjjhqfq33uxlbkiksb6n3g8ck2gvncmg7e0d5zajkugqqtp8dhx62i41y0dzr5f3zg1ishvaaplu6q1h59en6pqvb0botkh7gf == \d\8\s\k\x\1\d\x\k\r\3\5\k\h\e\z\f\w\7\b\v\b\g\f\z\q\9\s\0\6\y\k\0\p\6\e\e\5\z\0\4\c\d\a\g\s\0\k\b\6\k\4\q\d\k\6\p\7\j\z\s\q\v\b\p\x\3\a\u\y\4\p\2\j\w\z\w\s\y\m\i\a\u\t\5\s\z\a\7\q\c\z\6\b\4\l\9\n\d\b\q\d\5\1\8\0\n\9\q\i\e\0\z\w\8\t\v\u\7\g\8\b\5\0\x\v\m\r\5\3\f\s\l\s\8\s\o\k\3\0\0\7\0\s\2\r\h\a\k\a\q\0\6\b\o\l\o\f\3\u\m\4\6\y\i\w\a\2\j\0\2\t\l\l\x\h\a\u\o\x\i\6\g\v\x\t\l\a\k\e\u\t\0\j\v\q\f\r\s\j\x\j\3\w\d\p\o\d\g\h\k\k\4\g\c\v\o\m\x\f\u\f\x\l\l\2\r\w\j\2\v\6\s\7\9\9\r\d\z\m\p\0\t\c\5\3\q\j\0\3\r\q\w\l\5\u\z\z\l\m\g\y\q\s\1\a\t\8\a\o\c\7\u\e\r\o\y\l\e\b\v\c\k\b\7\u\l\w\6\n\q\m\g\3\6\2\3\5\h\8\s\4\q\d\i\l\p\z\e\e\j\a\z\5\9\6\g\2\9\t\t\q\c\0\5\k\g\1\n\k\u\5\6\y\1\0\d\o\y\5\k\o\7\2\g\q\o\j\a\p\l\y\m\4\n\l\9\5\1\2\h\u\c\c\j\k\d\a\5\a\e\0\6\w\i\r\n\t\o\9\i\q\m\s\t\j\9\m\g\w\m\z\j\b\i\s\h\e\2\f\y\e\c\y\0\d\b\d\p\r\0\r\z\9\s\j\j\h\q\f\q\3\3\u\x\l\b\k\i\k\s\b\6\n\3\g\8\c\k\2\g\v\n\c\m\g\7\e\0\d\5\z\a\j\k\u\g\q\q\t\p\8\d\h\x\6\2\i\4\1\y\0\d\z\r\5\f\3\z\g\1\i\s\h\v\a\a\p\l\u\6\q\1\h\5\9\e\n\6\p\q\v\b\0\b\o\t\k\h\7\g\f ]] 00:07:24.898 12:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:24.898 12:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:24.898 12:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:24.898 12:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:24.898 12:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:24.898 12:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:24.898 [2024-07-15 12:31:57.479323] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:24.898 [2024-07-15 12:31:57.479608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63291 ] 00:07:25.157 [2024-07-15 12:31:57.615085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.157 [2024-07-15 12:31:57.718383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.157 [2024-07-15 12:31:57.773795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.417  Copying: 512/512 [B] (average 500 kBps) 00:07:25.417 00:07:25.417 12:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e8czsxzrgly24l3wxk7o4iassssfsnnx63njwxz87ocxj7p1ptftw91hu68ohp3dg6955xhcq47wlr9mxo1ypqybhepynb25mlqfbkacymt0rh0ghb89ryauwhc5udj7acyb7wc3vz1q7nf0ql5p7e90z9fdd64713uzg5dxjgdlggfqzvncm81gjtevfym4x3vjhh0y13n8n2mdl584ix80fxe6rw9fjbyk356do5p0uzoa1jeq5i95d47xfn2xxyxev2ranuf5009ykf5fnm7kz4a9aiqhs255g37v3x8a513ge4zio8armbdznzlacps82vuv7c6izo7cgegagxp8jskdeago202il7rnojedeiuu33fp9jxztpvd3aeq8x1oxoe69hwbyiyusmrgvs8nci9la1katvdak32ckqdktqtvi6tlmdafgfjxfupv4p8rno88iov57a7kmrlt59oasjrsc384c98x5zxlw14uxkm27pf6rnw5614mik75 == \e\8\c\z\s\x\z\r\g\l\y\2\4\l\3\w\x\k\7\o\4\i\a\s\s\s\s\f\s\n\n\x\6\3\n\j\w\x\z\8\7\o\c\x\j\7\p\1\p\t\f\t\w\9\1\h\u\6\8\o\h\p\3\d\g\6\9\5\5\x\h\c\q\4\7\w\l\r\9\m\x\o\1\y\p\q\y\b\h\e\p\y\n\b\2\5\m\l\q\f\b\k\a\c\y\m\t\0\r\h\0\g\h\b\8\9\r\y\a\u\w\h\c\5\u\d\j\7\a\c\y\b\7\w\c\3\v\z\1\q\7\n\f\0\q\l\5\p\7\e\9\0\z\9\f\d\d\6\4\7\1\3\u\z\g\5\d\x\j\g\d\l\g\g\f\q\z\v\n\c\m\8\1\g\j\t\e\v\f\y\m\4\x\3\v\j\h\h\0\y\1\3\n\8\n\2\m\d\l\5\8\4\i\x\8\0\f\x\e\6\r\w\9\f\j\b\y\k\3\5\6\d\o\5\p\0\u\z\o\a\1\j\e\q\5\i\9\5\d\4\7\x\f\n\2\x\x\y\x\e\v\2\r\a\n\u\f\5\0\0\9\y\k\f\5\f\n\m\7\k\z\4\a\9\a\i\q\h\s\2\5\5\g\3\7\v\3\x\8\a\5\1\3\g\e\4\z\i\o\8\a\r\m\b\d\z\n\z\l\a\c\p\s\8\2\v\u\v\7\c\6\i\z\o\7\c\g\e\g\a\g\x\p\8\j\s\k\d\e\a\g\o\2\0\2\i\l\7\r\n\o\j\e\d\e\i\u\u\3\3\f\p\9\j\x\z\t\p\v\d\3\a\e\q\8\x\1\o\x\o\e\6\9\h\w\b\y\i\y\u\s\m\r\g\v\s\8\n\c\i\9\l\a\1\k\a\t\v\d\a\k\3\2\c\k\q\d\k\t\q\t\v\i\6\t\l\m\d\a\f\g\f\j\x\f\u\p\v\4\p\8\r\n\o\8\8\i\o\v\5\7\a\7\k\m\r\l\t\5\9\o\a\s\j\r\s\c\3\8\4\c\9\8\x\5\z\x\l\w\1\4\u\x\k\m\2\7\p\f\6\r\n\w\5\6\1\4\m\i\k\7\5 ]] 00:07:25.417 12:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.417 12:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:25.417 [2024-07-15 12:31:58.083694] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:25.417 [2024-07-15 12:31:58.084025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63306 ] 00:07:25.676 [2024-07-15 12:31:58.217374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.676 [2024-07-15 12:31:58.323689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.935 [2024-07-15 12:31:58.381925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.193  Copying: 512/512 [B] (average 500 kBps) 00:07:26.193 00:07:26.193 12:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e8czsxzrgly24l3wxk7o4iassssfsnnx63njwxz87ocxj7p1ptftw91hu68ohp3dg6955xhcq47wlr9mxo1ypqybhepynb25mlqfbkacymt0rh0ghb89ryauwhc5udj7acyb7wc3vz1q7nf0ql5p7e90z9fdd64713uzg5dxjgdlggfqzvncm81gjtevfym4x3vjhh0y13n8n2mdl584ix80fxe6rw9fjbyk356do5p0uzoa1jeq5i95d47xfn2xxyxev2ranuf5009ykf5fnm7kz4a9aiqhs255g37v3x8a513ge4zio8armbdznzlacps82vuv7c6izo7cgegagxp8jskdeago202il7rnojedeiuu33fp9jxztpvd3aeq8x1oxoe69hwbyiyusmrgvs8nci9la1katvdak32ckqdktqtvi6tlmdafgfjxfupv4p8rno88iov57a7kmrlt59oasjrsc384c98x5zxlw14uxkm27pf6rnw5614mik75 == \e\8\c\z\s\x\z\r\g\l\y\2\4\l\3\w\x\k\7\o\4\i\a\s\s\s\s\f\s\n\n\x\6\3\n\j\w\x\z\8\7\o\c\x\j\7\p\1\p\t\f\t\w\9\1\h\u\6\8\o\h\p\3\d\g\6\9\5\5\x\h\c\q\4\7\w\l\r\9\m\x\o\1\y\p\q\y\b\h\e\p\y\n\b\2\5\m\l\q\f\b\k\a\c\y\m\t\0\r\h\0\g\h\b\8\9\r\y\a\u\w\h\c\5\u\d\j\7\a\c\y\b\7\w\c\3\v\z\1\q\7\n\f\0\q\l\5\p\7\e\9\0\z\9\f\d\d\6\4\7\1\3\u\z\g\5\d\x\j\g\d\l\g\g\f\q\z\v\n\c\m\8\1\g\j\t\e\v\f\y\m\4\x\3\v\j\h\h\0\y\1\3\n\8\n\2\m\d\l\5\8\4\i\x\8\0\f\x\e\6\r\w\9\f\j\b\y\k\3\5\6\d\o\5\p\0\u\z\o\a\1\j\e\q\5\i\9\5\d\4\7\x\f\n\2\x\x\y\x\e\v\2\r\a\n\u\f\5\0\0\9\y\k\f\5\f\n\m\7\k\z\4\a\9\a\i\q\h\s\2\5\5\g\3\7\v\3\x\8\a\5\1\3\g\e\4\z\i\o\8\a\r\m\b\d\z\n\z\l\a\c\p\s\8\2\v\u\v\7\c\6\i\z\o\7\c\g\e\g\a\g\x\p\8\j\s\k\d\e\a\g\o\2\0\2\i\l\7\r\n\o\j\e\d\e\i\u\u\3\3\f\p\9\j\x\z\t\p\v\d\3\a\e\q\8\x\1\o\x\o\e\6\9\h\w\b\y\i\y\u\s\m\r\g\v\s\8\n\c\i\9\l\a\1\k\a\t\v\d\a\k\3\2\c\k\q\d\k\t\q\t\v\i\6\t\l\m\d\a\f\g\f\j\x\f\u\p\v\4\p\8\r\n\o\8\8\i\o\v\5\7\a\7\k\m\r\l\t\5\9\o\a\s\j\r\s\c\3\8\4\c\9\8\x\5\z\x\l\w\1\4\u\x\k\m\2\7\p\f\6\r\n\w\5\6\1\4\m\i\k\7\5 ]] 00:07:26.193 12:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.193 12:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:26.193 [2024-07-15 12:31:58.691758] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:26.193 [2024-07-15 12:31:58.691863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63314 ] 00:07:26.193 [2024-07-15 12:31:58.833138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.452 [2024-07-15 12:31:58.957358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.452 [2024-07-15 12:31:59.016664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.712  Copying: 512/512 [B] (average 250 kBps) 00:07:26.712 00:07:26.712 12:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e8czsxzrgly24l3wxk7o4iassssfsnnx63njwxz87ocxj7p1ptftw91hu68ohp3dg6955xhcq47wlr9mxo1ypqybhepynb25mlqfbkacymt0rh0ghb89ryauwhc5udj7acyb7wc3vz1q7nf0ql5p7e90z9fdd64713uzg5dxjgdlggfqzvncm81gjtevfym4x3vjhh0y13n8n2mdl584ix80fxe6rw9fjbyk356do5p0uzoa1jeq5i95d47xfn2xxyxev2ranuf5009ykf5fnm7kz4a9aiqhs255g37v3x8a513ge4zio8armbdznzlacps82vuv7c6izo7cgegagxp8jskdeago202il7rnojedeiuu33fp9jxztpvd3aeq8x1oxoe69hwbyiyusmrgvs8nci9la1katvdak32ckqdktqtvi6tlmdafgfjxfupv4p8rno88iov57a7kmrlt59oasjrsc384c98x5zxlw14uxkm27pf6rnw5614mik75 == \e\8\c\z\s\x\z\r\g\l\y\2\4\l\3\w\x\k\7\o\4\i\a\s\s\s\s\f\s\n\n\x\6\3\n\j\w\x\z\8\7\o\c\x\j\7\p\1\p\t\f\t\w\9\1\h\u\6\8\o\h\p\3\d\g\6\9\5\5\x\h\c\q\4\7\w\l\r\9\m\x\o\1\y\p\q\y\b\h\e\p\y\n\b\2\5\m\l\q\f\b\k\a\c\y\m\t\0\r\h\0\g\h\b\8\9\r\y\a\u\w\h\c\5\u\d\j\7\a\c\y\b\7\w\c\3\v\z\1\q\7\n\f\0\q\l\5\p\7\e\9\0\z\9\f\d\d\6\4\7\1\3\u\z\g\5\d\x\j\g\d\l\g\g\f\q\z\v\n\c\m\8\1\g\j\t\e\v\f\y\m\4\x\3\v\j\h\h\0\y\1\3\n\8\n\2\m\d\l\5\8\4\i\x\8\0\f\x\e\6\r\w\9\f\j\b\y\k\3\5\6\d\o\5\p\0\u\z\o\a\1\j\e\q\5\i\9\5\d\4\7\x\f\n\2\x\x\y\x\e\v\2\r\a\n\u\f\5\0\0\9\y\k\f\5\f\n\m\7\k\z\4\a\9\a\i\q\h\s\2\5\5\g\3\7\v\3\x\8\a\5\1\3\g\e\4\z\i\o\8\a\r\m\b\d\z\n\z\l\a\c\p\s\8\2\v\u\v\7\c\6\i\z\o\7\c\g\e\g\a\g\x\p\8\j\s\k\d\e\a\g\o\2\0\2\i\l\7\r\n\o\j\e\d\e\i\u\u\3\3\f\p\9\j\x\z\t\p\v\d\3\a\e\q\8\x\1\o\x\o\e\6\9\h\w\b\y\i\y\u\s\m\r\g\v\s\8\n\c\i\9\l\a\1\k\a\t\v\d\a\k\3\2\c\k\q\d\k\t\q\t\v\i\6\t\l\m\d\a\f\g\f\j\x\f\u\p\v\4\p\8\r\n\o\8\8\i\o\v\5\7\a\7\k\m\r\l\t\5\9\o\a\s\j\r\s\c\3\8\4\c\9\8\x\5\z\x\l\w\1\4\u\x\k\m\2\7\p\f\6\r\n\w\5\6\1\4\m\i\k\7\5 ]] 00:07:26.712 12:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.712 12:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:26.712 [2024-07-15 12:31:59.321870] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:26.712 [2024-07-15 12:31:59.321964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63325 ] 00:07:26.971 [2024-07-15 12:31:59.458512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.971 [2024-07-15 12:31:59.559351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.971 [2024-07-15 12:31:59.619271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.229  Copying: 512/512 [B] (average 250 kBps) 00:07:27.229 00:07:27.229 ************************************ 00:07:27.229 END TEST dd_flags_misc 00:07:27.229 ************************************ 00:07:27.230 12:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e8czsxzrgly24l3wxk7o4iassssfsnnx63njwxz87ocxj7p1ptftw91hu68ohp3dg6955xhcq47wlr9mxo1ypqybhepynb25mlqfbkacymt0rh0ghb89ryauwhc5udj7acyb7wc3vz1q7nf0ql5p7e90z9fdd64713uzg5dxjgdlggfqzvncm81gjtevfym4x3vjhh0y13n8n2mdl584ix80fxe6rw9fjbyk356do5p0uzoa1jeq5i95d47xfn2xxyxev2ranuf5009ykf5fnm7kz4a9aiqhs255g37v3x8a513ge4zio8armbdznzlacps82vuv7c6izo7cgegagxp8jskdeago202il7rnojedeiuu33fp9jxztpvd3aeq8x1oxoe69hwbyiyusmrgvs8nci9la1katvdak32ckqdktqtvi6tlmdafgfjxfupv4p8rno88iov57a7kmrlt59oasjrsc384c98x5zxlw14uxkm27pf6rnw5614mik75 == \e\8\c\z\s\x\z\r\g\l\y\2\4\l\3\w\x\k\7\o\4\i\a\s\s\s\s\f\s\n\n\x\6\3\n\j\w\x\z\8\7\o\c\x\j\7\p\1\p\t\f\t\w\9\1\h\u\6\8\o\h\p\3\d\g\6\9\5\5\x\h\c\q\4\7\w\l\r\9\m\x\o\1\y\p\q\y\b\h\e\p\y\n\b\2\5\m\l\q\f\b\k\a\c\y\m\t\0\r\h\0\g\h\b\8\9\r\y\a\u\w\h\c\5\u\d\j\7\a\c\y\b\7\w\c\3\v\z\1\q\7\n\f\0\q\l\5\p\7\e\9\0\z\9\f\d\d\6\4\7\1\3\u\z\g\5\d\x\j\g\d\l\g\g\f\q\z\v\n\c\m\8\1\g\j\t\e\v\f\y\m\4\x\3\v\j\h\h\0\y\1\3\n\8\n\2\m\d\l\5\8\4\i\x\8\0\f\x\e\6\r\w\9\f\j\b\y\k\3\5\6\d\o\5\p\0\u\z\o\a\1\j\e\q\5\i\9\5\d\4\7\x\f\n\2\x\x\y\x\e\v\2\r\a\n\u\f\5\0\0\9\y\k\f\5\f\n\m\7\k\z\4\a\9\a\i\q\h\s\2\5\5\g\3\7\v\3\x\8\a\5\1\3\g\e\4\z\i\o\8\a\r\m\b\d\z\n\z\l\a\c\p\s\8\2\v\u\v\7\c\6\i\z\o\7\c\g\e\g\a\g\x\p\8\j\s\k\d\e\a\g\o\2\0\2\i\l\7\r\n\o\j\e\d\e\i\u\u\3\3\f\p\9\j\x\z\t\p\v\d\3\a\e\q\8\x\1\o\x\o\e\6\9\h\w\b\y\i\y\u\s\m\r\g\v\s\8\n\c\i\9\l\a\1\k\a\t\v\d\a\k\3\2\c\k\q\d\k\t\q\t\v\i\6\t\l\m\d\a\f\g\f\j\x\f\u\p\v\4\p\8\r\n\o\8\8\i\o\v\5\7\a\7\k\m\r\l\t\5\9\o\a\s\j\r\s\c\3\8\4\c\9\8\x\5\z\x\l\w\1\4\u\x\k\m\2\7\p\f\6\r\n\w\5\6\1\4\m\i\k\7\5 ]] 00:07:27.230 00:07:27.230 real 0m4.909s 00:07:27.230 user 0m2.783s 00:07:27.230 sys 0m2.326s 00:07:27.230 12:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.230 12:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:27.230 12:31:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:27.230 12:31:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:27.230 12:31:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:27.230 * Second test run, disabling liburing, forcing AIO 00:07:27.230 12:31:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:27.489 ************************************ 00:07:27.489 START TEST dd_flag_append_forced_aio 00:07:27.489 ************************************ 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=jrff9x26fhnfuynopq38ad37vu3pxgzv 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=msfewecviyqr3n1cxi9g68moj0kwiepq 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s jrff9x26fhnfuynopq38ad37vu3pxgzv 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s msfewecviyqr3n1cxi9g68moj0kwiepq 00:07:27.489 12:31:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:27.489 [2024-07-15 12:31:59.980892] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
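From the "Second test run, disabling liburing, forcing AIO" banner onward the harness adds --aio to every spdk_dd invocation (DD_APP+=("--aio")), so the append case above repeats the first pass with the AIO backend instead of the liburing path used earlier. The only difference from the first run is that extra flag, roughly (same hedged sketch as before, with spdk_dd standing for the full path in the trace):

  spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]]   # same concatenation check, now via forced AIO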
00:07:27.489 [2024-07-15 12:31:59.980986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:07:27.489 [2024-07-15 12:32:00.123872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.747 [2024-07-15 12:32:00.241416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.747 [2024-07-15 12:32:00.297778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.005  Copying: 32/32 [B] (average 31 kBps) 00:07:28.005 00:07:28.005 ************************************ 00:07:28.005 END TEST dd_flag_append_forced_aio 00:07:28.005 ************************************ 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ msfewecviyqr3n1cxi9g68moj0kwiepqjrff9x26fhnfuynopq38ad37vu3pxgzv == \m\s\f\e\w\e\c\v\i\y\q\r\3\n\1\c\x\i\9\g\6\8\m\o\j\0\k\w\i\e\p\q\j\r\f\f\9\x\2\6\f\h\n\f\u\y\n\o\p\q\3\8\a\d\3\7\v\u\3\p\x\g\z\v ]] 00:07:28.005 00:07:28.005 real 0m0.657s 00:07:28.005 user 0m0.375s 00:07:28.005 sys 0m0.159s 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.005 ************************************ 00:07:28.005 START TEST dd_flag_directory_forced_aio 00:07:28.005 ************************************ 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.005 12:32:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.263 [2024-07-15 12:32:00.690211] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:28.263 [2024-07-15 12:32:00.690332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63380 ] 00:07:28.263 [2024-07-15 12:32:00.834813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.521 [2024-07-15 12:32:00.987917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.521 [2024-07-15 12:32:01.046717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.521 [2024-07-15 12:32:01.079865] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.521 [2024-07-15 12:32:01.079917] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.521 [2024-07-15 12:32:01.079947] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.521 [2024-07-15 12:32:01.191203] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.779 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:28.779 [2024-07-15 12:32:01.322517] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:28.779 [2024-07-15 12:32:01.322632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63395 ] 00:07:28.779 [2024-07-15 12:32:01.456564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.037 [2024-07-15 12:32:01.575950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.037 [2024-07-15 12:32:01.637155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.037 [2024-07-15 12:32:01.673837] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.037 [2024-07-15 12:32:01.673893] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.037 [2024-07-15 12:32:01.673910] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.295 [2024-07-15 12:32:01.801419] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:29.295 
12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.295 00:07:29.295 real 0m1.278s 00:07:29.295 user 0m0.748s 00:07:29.295 sys 0m0.318s 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:29.295 ************************************ 00:07:29.295 END TEST dd_flag_directory_forced_aio 00:07:29.295 ************************************ 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:29.295 ************************************ 00:07:29.295 START TEST dd_flag_nofollow_forced_aio 00:07:29.295 ************************************ 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.295 12:32:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.552 [2024-07-15 12:32:02.018816] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:29.553 [2024-07-15 12:32:02.018930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63429 ] 00:07:29.553 [2024-07-15 12:32:02.154064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.811 [2024-07-15 12:32:02.270274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.811 [2024-07-15 12:32:02.328417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.811 [2024-07-15 12:32:02.363987] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:29.811 [2024-07-15 12:32:02.364057] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:29.811 [2024-07-15 12:32:02.364088] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.811 [2024-07-15 12:32:02.481256] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
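The directory and nofollow cases in this block follow the same negative-test pattern: the script points spdk_dd at an input that should violate the flag (a plain file for --iflag=directory, a symlink for --iflag=nofollow) and asserts that the copy fails with the expected error. A minimal sketch of the nofollow half of that check, run outside the harness's NOT/valid_exec_arg helpers and assuming the repo-relative binary path and dump-file names seen above:

# sketch only: --iflag=nofollow must refuse to open a symlinked input
ln -fs dd.dump0 dd.dump0.link
if ./build/bin/spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
    echo "unexpected: spdk_dd followed the symlink" >&2
    exit 1
else
    echo "expected failure: open() returned ELOOP (Too many levels of symbolic links)"
fi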
00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.069 12:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:30.069 [2024-07-15 12:32:02.643685] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:30.069 [2024-07-15 12:32:02.643797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:07:30.326 [2024-07-15 12:32:02.780108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.326 [2024-07-15 12:32:02.897910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.326 [2024-07-15 12:32:02.956680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.326 [2024-07-15 12:32:02.994435] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:30.326 [2024-07-15 12:32:02.994501] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:30.326 [2024-07-15 12:32:02.994534] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.583 [2024-07-15 12:32:03.115176] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.583 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.583 [2024-07-15 12:32:03.264227] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:30.583 [2024-07-15 12:32:03.264322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63446 ] 00:07:30.841 [2024-07-15 12:32:03.395649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.841 [2024-07-15 12:32:03.507611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.099 [2024-07-15 12:32:03.567398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.358  Copying: 512/512 [B] (average 500 kBps) 00:07:31.358 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ cviel824bsbazfsx7q6w2tkpmeluw9a1368m2ue91ib26g75sjmelp62qacioetx0tss3urgsk397rj2gutj14z4q2q3zv7kkq9rbc77nu4kjb2xt2a9h7cex3fis6fnsdawy51weesckji3lebt38rwbimsqutuni3s53c7wyla8ciajwvlluukngkuskwibsqqsl1s9jn5joygjbdwkny751ztt5gm0qlqmt0ns3kqpl7rqrw8hepiijyvj7lv0c5m4k19zufltvqjd7brq3f91mgyat7nz43ih48o1ybrmbiukeq1llkgiroe74n81hk65x8lgnzrj3sgqtq68ksq6smzpy1g3u35y041b548vaxukaoy3tm29lkba7o6n2st9yo7fjcmwjr9fpwe3idycxa14jlnqbt45ilmwwkb51vphkaizcjj6cwwt56azi43bpup76f9jwyluqnhc7bpqk5nq6xgs2orkxc0zo2tkt5xld0r0jmvyubg656t == \c\v\i\e\l\8\2\4\b\s\b\a\z\f\s\x\7\q\6\w\2\t\k\p\m\e\l\u\w\9\a\1\3\6\8\m\2\u\e\9\1\i\b\2\6\g\7\5\s\j\m\e\l\p\6\2\q\a\c\i\o\e\t\x\0\t\s\s\3\u\r\g\s\k\3\9\7\r\j\2\g\u\t\j\1\4\z\4\q\2\q\3\z\v\7\k\k\q\9\r\b\c\7\7\n\u\4\k\j\b\2\x\t\2\a\9\h\7\c\e\x\3\f\i\s\6\f\n\s\d\a\w\y\5\1\w\e\e\s\c\k\j\i\3\l\e\b\t\3\8\r\w\b\i\m\s\q\u\t\u\n\i\3\s\5\3\c\7\w\y\l\a\8\c\i\a\j\w\v\l\l\u\u\k\n\g\k\u\s\k\w\i\b\s\q\q\s\l\1\s\9\j\n\5\j\o\y\g\j\b\d\w\k\n\y\7\5\1\z\t\t\5\g\m\0\q\l\q\m\t\0\n\s\3\k\q\p\l\7\r\q\r\w\8\h\e\p\i\i\j\y\v\j\7\l\v\0\c\5\m\4\k\1\9\z\u\f\l\t\v\q\j\d\7\b\r\q\3\f\9\1\m\g\y\a\t\7\n\z\4\3\i\h\4\8\o\1\y\b\r\m\b\i\u\k\e\q\1\l\l\k\g\i\r\o\e\7\4\n\8\1\h\k\6\5\x\8\l\g\n\z\r\j\3\s\g\q\t\q\6\8\k\s\q\6\s\m\z\p\y\1\g\3\u\3\5\y\0\4\1\b\5\4\8\v\a\x\u\k\a\o\y\3\t\m\2\9\l\k\b\a\7\o\6\n\2\s\t\9\y\o\7\f\j\c\m\w\j\r\9\f\p\w\e\3\i\d\y\c\x\a\1\4\j\l\n\q\b\t\4\5\i\l\m\w\w\k\b\5\1\v\p\h\k\a\i\z\c\j\j\6\c\w\w\t\5\6\a\z\i\4\3\b\p\u\p\7\6\f\9\j\w\y\l\u\q\n\h\c\7\b\p\q\k\5\n\q\6\x\g\s\2\o\r\k\x\c\0\z\o\2\t\k\t\5\x\l\d\0\r\0\j\m\v\y\u\b\g\6\5\6\t ]] 00:07:31.358 00:07:31.358 real 0m1.884s 00:07:31.358 user 0m1.075s 00:07:31.358 sys 0m0.479s 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.358 ************************************ 00:07:31.358 END TEST dd_flag_nofollow_forced_aio 
00:07:31.358 ************************************ 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:31.358 ************************************ 00:07:31.358 START TEST dd_flag_noatime_forced_aio 00:07:31.358 ************************************ 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721046723 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721046723 00:07:31.358 12:32:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:32.290 12:32:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.290 [2024-07-15 12:32:04.966172] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
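The noatime test that starts here is a positive check rather than an expected failure: it records the source file's access time, sleeps one second, copies with --iflag=noatime, and verifies the atime did not move (a later copy without the flag is expected to advance it). Roughly, and assuming a filesystem where atime updates are not already suppressed by a noatime/relatime mount:

# sketch only: --iflag=noatime should leave the source atime untouched
atime_before=$(stat --printf=%X dd.dump0)
sleep 1
./build/bin/spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
atime_after=$(stat --printf=%X dd.dump0)
(( atime_before == atime_after )) && echo "atime preserved" || echo "atime changed unexpectedly"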
00:07:32.290 [2024-07-15 12:32:04.966306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63482 ] 00:07:32.548 [2024-07-15 12:32:05.101536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.548 [2024-07-15 12:32:05.200768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.807 [2024-07-15 12:32:05.257989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.066  Copying: 512/512 [B] (average 500 kBps) 00:07:33.066 00:07:33.066 12:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.066 12:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721046723 )) 00:07:33.066 12:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.066 12:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721046723 )) 00:07:33.066 12:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.066 [2024-07-15 12:32:05.589881] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:33.066 [2024-07-15 12:32:05.590012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63499 ] 00:07:33.066 [2024-07-15 12:32:05.728746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.323 [2024-07-15 12:32:05.840510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.323 [2024-07-15 12:32:05.899449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.583  Copying: 512/512 [B] (average 500 kBps) 00:07:33.583 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721046725 )) 00:07:33.583 00:07:33.583 real 0m2.295s 00:07:33.583 user 0m0.719s 00:07:33.583 sys 0m0.332s 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.583 ************************************ 00:07:33.583 END TEST dd_flag_noatime_forced_aio 00:07:33.583 ************************************ 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.583 12:32:06 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.583 ************************************ 00:07:33.583 START TEST dd_flags_misc_forced_aio 00:07:33.583 ************************************ 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.583 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:33.842 [2024-07-15 12:32:06.301672] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:33.842 [2024-07-15 12:32:06.301792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63531 ] 00:07:33.842 [2024-07-15 12:32:06.441882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.100 [2024-07-15 12:32:06.554026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.100 [2024-07-15 12:32:06.613840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.359  Copying: 512/512 [B] (average 500 kBps) 00:07:34.359 00:07:34.359 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nzva1m9rqiwqiniohaxjmwiljzg5a752uujcvsdhk5h6wyr845yys4a7601n8c4hpqi8n2hcb3swsb1n6eyyyiknntiqpqys6chzc75btg10m1hm9fhzq79ob9amnq68e2es3btcr5bvubvqs31hkef7zzg22xigmzw2j5ivnrgat15ube4gll95k1da6we4b23pyrla57ybwdjh1z04nr4el1h7nqzhe1m9slimxqr9b26fzygmiqyu9nrpqm0td6cex7scbpvh2725n2usbwk7gwbr06g9f6bqlmzosumwiw86fd953kiuyjuven86rrxtgwyefhjuhsis4dcjopjubsq1vifc44ejp4fj6zu0okfo2in29dibbuw0df9ayxexh4p5chuzxbxwmz9ij2290e75jy3n4lrij2q2vxpcul55cs8824xyc409ixbz33nj30rzyz3eh0enmoondq71rqkhm8embfv09le6riovssghd6nayjll51wwf2u == 
\9\n\z\v\a\1\m\9\r\q\i\w\q\i\n\i\o\h\a\x\j\m\w\i\l\j\z\g\5\a\7\5\2\u\u\j\c\v\s\d\h\k\5\h\6\w\y\r\8\4\5\y\y\s\4\a\7\6\0\1\n\8\c\4\h\p\q\i\8\n\2\h\c\b\3\s\w\s\b\1\n\6\e\y\y\y\i\k\n\n\t\i\q\p\q\y\s\6\c\h\z\c\7\5\b\t\g\1\0\m\1\h\m\9\f\h\z\q\7\9\o\b\9\a\m\n\q\6\8\e\2\e\s\3\b\t\c\r\5\b\v\u\b\v\q\s\3\1\h\k\e\f\7\z\z\g\2\2\x\i\g\m\z\w\2\j\5\i\v\n\r\g\a\t\1\5\u\b\e\4\g\l\l\9\5\k\1\d\a\6\w\e\4\b\2\3\p\y\r\l\a\5\7\y\b\w\d\j\h\1\z\0\4\n\r\4\e\l\1\h\7\n\q\z\h\e\1\m\9\s\l\i\m\x\q\r\9\b\2\6\f\z\y\g\m\i\q\y\u\9\n\r\p\q\m\0\t\d\6\c\e\x\7\s\c\b\p\v\h\2\7\2\5\n\2\u\s\b\w\k\7\g\w\b\r\0\6\g\9\f\6\b\q\l\m\z\o\s\u\m\w\i\w\8\6\f\d\9\5\3\k\i\u\y\j\u\v\e\n\8\6\r\r\x\t\g\w\y\e\f\h\j\u\h\s\i\s\4\d\c\j\o\p\j\u\b\s\q\1\v\i\f\c\4\4\e\j\p\4\f\j\6\z\u\0\o\k\f\o\2\i\n\2\9\d\i\b\b\u\w\0\d\f\9\a\y\x\e\x\h\4\p\5\c\h\u\z\x\b\x\w\m\z\9\i\j\2\2\9\0\e\7\5\j\y\3\n\4\l\r\i\j\2\q\2\v\x\p\c\u\l\5\5\c\s\8\8\2\4\x\y\c\4\0\9\i\x\b\z\3\3\n\j\3\0\r\z\y\z\3\e\h\0\e\n\m\o\o\n\d\q\7\1\r\q\k\h\m\8\e\m\b\f\v\0\9\l\e\6\r\i\o\v\s\s\g\h\d\6\n\a\y\j\l\l\5\1\w\w\f\2\u ]] 00:07:34.359 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.359 12:32:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:34.359 [2024-07-15 12:32:06.958301] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:34.359 [2024-07-15 12:32:06.958469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63533 ] 00:07:34.618 [2024-07-15 12:32:07.101837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.618 [2024-07-15 12:32:07.216426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.618 [2024-07-15 12:32:07.270154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.877  Copying: 512/512 [B] (average 500 kBps) 00:07:34.877 00:07:34.877 12:32:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nzva1m9rqiwqiniohaxjmwiljzg5a752uujcvsdhk5h6wyr845yys4a7601n8c4hpqi8n2hcb3swsb1n6eyyyiknntiqpqys6chzc75btg10m1hm9fhzq79ob9amnq68e2es3btcr5bvubvqs31hkef7zzg22xigmzw2j5ivnrgat15ube4gll95k1da6we4b23pyrla57ybwdjh1z04nr4el1h7nqzhe1m9slimxqr9b26fzygmiqyu9nrpqm0td6cex7scbpvh2725n2usbwk7gwbr06g9f6bqlmzosumwiw86fd953kiuyjuven86rrxtgwyefhjuhsis4dcjopjubsq1vifc44ejp4fj6zu0okfo2in29dibbuw0df9ayxexh4p5chuzxbxwmz9ij2290e75jy3n4lrij2q2vxpcul55cs8824xyc409ixbz33nj30rzyz3eh0enmoondq71rqkhm8embfv09le6riovssghd6nayjll51wwf2u == 
\9\n\z\v\a\1\m\9\r\q\i\w\q\i\n\i\o\h\a\x\j\m\w\i\l\j\z\g\5\a\7\5\2\u\u\j\c\v\s\d\h\k\5\h\6\w\y\r\8\4\5\y\y\s\4\a\7\6\0\1\n\8\c\4\h\p\q\i\8\n\2\h\c\b\3\s\w\s\b\1\n\6\e\y\y\y\i\k\n\n\t\i\q\p\q\y\s\6\c\h\z\c\7\5\b\t\g\1\0\m\1\h\m\9\f\h\z\q\7\9\o\b\9\a\m\n\q\6\8\e\2\e\s\3\b\t\c\r\5\b\v\u\b\v\q\s\3\1\h\k\e\f\7\z\z\g\2\2\x\i\g\m\z\w\2\j\5\i\v\n\r\g\a\t\1\5\u\b\e\4\g\l\l\9\5\k\1\d\a\6\w\e\4\b\2\3\p\y\r\l\a\5\7\y\b\w\d\j\h\1\z\0\4\n\r\4\e\l\1\h\7\n\q\z\h\e\1\m\9\s\l\i\m\x\q\r\9\b\2\6\f\z\y\g\m\i\q\y\u\9\n\r\p\q\m\0\t\d\6\c\e\x\7\s\c\b\p\v\h\2\7\2\5\n\2\u\s\b\w\k\7\g\w\b\r\0\6\g\9\f\6\b\q\l\m\z\o\s\u\m\w\i\w\8\6\f\d\9\5\3\k\i\u\y\j\u\v\e\n\8\6\r\r\x\t\g\w\y\e\f\h\j\u\h\s\i\s\4\d\c\j\o\p\j\u\b\s\q\1\v\i\f\c\4\4\e\j\p\4\f\j\6\z\u\0\o\k\f\o\2\i\n\2\9\d\i\b\b\u\w\0\d\f\9\a\y\x\e\x\h\4\p\5\c\h\u\z\x\b\x\w\m\z\9\i\j\2\2\9\0\e\7\5\j\y\3\n\4\l\r\i\j\2\q\2\v\x\p\c\u\l\5\5\c\s\8\8\2\4\x\y\c\4\0\9\i\x\b\z\3\3\n\j\3\0\r\z\y\z\3\e\h\0\e\n\m\o\o\n\d\q\7\1\r\q\k\h\m\8\e\m\b\f\v\0\9\l\e\6\r\i\o\v\s\s\g\h\d\6\n\a\y\j\l\l\5\1\w\w\f\2\u ]] 00:07:34.877 12:32:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.877 12:32:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:35.135 [2024-07-15 12:32:07.578357] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:35.135 [2024-07-15 12:32:07.578460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63546 ] 00:07:35.135 [2024-07-15 12:32:07.715437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.394 [2024-07-15 12:32:07.817981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.394 [2024-07-15 12:32:07.872192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.654  Copying: 512/512 [B] (average 166 kBps) 00:07:35.654 00:07:35.655 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nzva1m9rqiwqiniohaxjmwiljzg5a752uujcvsdhk5h6wyr845yys4a7601n8c4hpqi8n2hcb3swsb1n6eyyyiknntiqpqys6chzc75btg10m1hm9fhzq79ob9amnq68e2es3btcr5bvubvqs31hkef7zzg22xigmzw2j5ivnrgat15ube4gll95k1da6we4b23pyrla57ybwdjh1z04nr4el1h7nqzhe1m9slimxqr9b26fzygmiqyu9nrpqm0td6cex7scbpvh2725n2usbwk7gwbr06g9f6bqlmzosumwiw86fd953kiuyjuven86rrxtgwyefhjuhsis4dcjopjubsq1vifc44ejp4fj6zu0okfo2in29dibbuw0df9ayxexh4p5chuzxbxwmz9ij2290e75jy3n4lrij2q2vxpcul55cs8824xyc409ixbz33nj30rzyz3eh0enmoondq71rqkhm8embfv09le6riovssghd6nayjll51wwf2u == 
\9\n\z\v\a\1\m\9\r\q\i\w\q\i\n\i\o\h\a\x\j\m\w\i\l\j\z\g\5\a\7\5\2\u\u\j\c\v\s\d\h\k\5\h\6\w\y\r\8\4\5\y\y\s\4\a\7\6\0\1\n\8\c\4\h\p\q\i\8\n\2\h\c\b\3\s\w\s\b\1\n\6\e\y\y\y\i\k\n\n\t\i\q\p\q\y\s\6\c\h\z\c\7\5\b\t\g\1\0\m\1\h\m\9\f\h\z\q\7\9\o\b\9\a\m\n\q\6\8\e\2\e\s\3\b\t\c\r\5\b\v\u\b\v\q\s\3\1\h\k\e\f\7\z\z\g\2\2\x\i\g\m\z\w\2\j\5\i\v\n\r\g\a\t\1\5\u\b\e\4\g\l\l\9\5\k\1\d\a\6\w\e\4\b\2\3\p\y\r\l\a\5\7\y\b\w\d\j\h\1\z\0\4\n\r\4\e\l\1\h\7\n\q\z\h\e\1\m\9\s\l\i\m\x\q\r\9\b\2\6\f\z\y\g\m\i\q\y\u\9\n\r\p\q\m\0\t\d\6\c\e\x\7\s\c\b\p\v\h\2\7\2\5\n\2\u\s\b\w\k\7\g\w\b\r\0\6\g\9\f\6\b\q\l\m\z\o\s\u\m\w\i\w\8\6\f\d\9\5\3\k\i\u\y\j\u\v\e\n\8\6\r\r\x\t\g\w\y\e\f\h\j\u\h\s\i\s\4\d\c\j\o\p\j\u\b\s\q\1\v\i\f\c\4\4\e\j\p\4\f\j\6\z\u\0\o\k\f\o\2\i\n\2\9\d\i\b\b\u\w\0\d\f\9\a\y\x\e\x\h\4\p\5\c\h\u\z\x\b\x\w\m\z\9\i\j\2\2\9\0\e\7\5\j\y\3\n\4\l\r\i\j\2\q\2\v\x\p\c\u\l\5\5\c\s\8\8\2\4\x\y\c\4\0\9\i\x\b\z\3\3\n\j\3\0\r\z\y\z\3\e\h\0\e\n\m\o\o\n\d\q\7\1\r\q\k\h\m\8\e\m\b\f\v\0\9\l\e\6\r\i\o\v\s\s\g\h\d\6\n\a\y\j\l\l\5\1\w\w\f\2\u ]] 00:07:35.655 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:35.655 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:35.655 [2024-07-15 12:32:08.186849] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:35.655 [2024-07-15 12:32:08.186938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63554 ] 00:07:35.655 [2024-07-15 12:32:08.325307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.913 [2024-07-15 12:32:08.419883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.913 [2024-07-15 12:32:08.473978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.172  Copying: 512/512 [B] (average 250 kBps) 00:07:36.172 00:07:36.172 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nzva1m9rqiwqiniohaxjmwiljzg5a752uujcvsdhk5h6wyr845yys4a7601n8c4hpqi8n2hcb3swsb1n6eyyyiknntiqpqys6chzc75btg10m1hm9fhzq79ob9amnq68e2es3btcr5bvubvqs31hkef7zzg22xigmzw2j5ivnrgat15ube4gll95k1da6we4b23pyrla57ybwdjh1z04nr4el1h7nqzhe1m9slimxqr9b26fzygmiqyu9nrpqm0td6cex7scbpvh2725n2usbwk7gwbr06g9f6bqlmzosumwiw86fd953kiuyjuven86rrxtgwyefhjuhsis4dcjopjubsq1vifc44ejp4fj6zu0okfo2in29dibbuw0df9ayxexh4p5chuzxbxwmz9ij2290e75jy3n4lrij2q2vxpcul55cs8824xyc409ixbz33nj30rzyz3eh0enmoondq71rqkhm8embfv09le6riovssghd6nayjll51wwf2u == 
\9\n\z\v\a\1\m\9\r\q\i\w\q\i\n\i\o\h\a\x\j\m\w\i\l\j\z\g\5\a\7\5\2\u\u\j\c\v\s\d\h\k\5\h\6\w\y\r\8\4\5\y\y\s\4\a\7\6\0\1\n\8\c\4\h\p\q\i\8\n\2\h\c\b\3\s\w\s\b\1\n\6\e\y\y\y\i\k\n\n\t\i\q\p\q\y\s\6\c\h\z\c\7\5\b\t\g\1\0\m\1\h\m\9\f\h\z\q\7\9\o\b\9\a\m\n\q\6\8\e\2\e\s\3\b\t\c\r\5\b\v\u\b\v\q\s\3\1\h\k\e\f\7\z\z\g\2\2\x\i\g\m\z\w\2\j\5\i\v\n\r\g\a\t\1\5\u\b\e\4\g\l\l\9\5\k\1\d\a\6\w\e\4\b\2\3\p\y\r\l\a\5\7\y\b\w\d\j\h\1\z\0\4\n\r\4\e\l\1\h\7\n\q\z\h\e\1\m\9\s\l\i\m\x\q\r\9\b\2\6\f\z\y\g\m\i\q\y\u\9\n\r\p\q\m\0\t\d\6\c\e\x\7\s\c\b\p\v\h\2\7\2\5\n\2\u\s\b\w\k\7\g\w\b\r\0\6\g\9\f\6\b\q\l\m\z\o\s\u\m\w\i\w\8\6\f\d\9\5\3\k\i\u\y\j\u\v\e\n\8\6\r\r\x\t\g\w\y\e\f\h\j\u\h\s\i\s\4\d\c\j\o\p\j\u\b\s\q\1\v\i\f\c\4\4\e\j\p\4\f\j\6\z\u\0\o\k\f\o\2\i\n\2\9\d\i\b\b\u\w\0\d\f\9\a\y\x\e\x\h\4\p\5\c\h\u\z\x\b\x\w\m\z\9\i\j\2\2\9\0\e\7\5\j\y\3\n\4\l\r\i\j\2\q\2\v\x\p\c\u\l\5\5\c\s\8\8\2\4\x\y\c\4\0\9\i\x\b\z\3\3\n\j\3\0\r\z\y\z\3\e\h\0\e\n\m\o\o\n\d\q\7\1\r\q\k\h\m\8\e\m\b\f\v\0\9\l\e\6\r\i\o\v\s\s\g\h\d\6\n\a\y\j\l\l\5\1\w\w\f\2\u ]] 00:07:36.173 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:36.173 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:36.173 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:36.173 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.173 12:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:36.173 [2024-07-15 12:32:08.825224] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
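The long backslash-heavy runs above are not log corruption: they are bash xtrace's rendering of the quoted right-hand side of a [[ ... == ... ]] comparison, escaped character by character so the randomly generated payload is matched literally instead of being interpreted as a glob pattern. The same verification idiom in miniature, with hypothetical file names standing in for the harness's dump files:

# sketch only: literal comparison of copied contents after each flag combination
expected=$(<dd.dump0)
actual=$(<dd.dump1)
[[ $actual == "$expected" ]] && echo "contents match" || { echo "contents differ" >&2; exit 1; }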
00:07:36.173 [2024-07-15 12:32:08.825348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63561 ] 00:07:36.431 [2024-07-15 12:32:08.963785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.431 [2024-07-15 12:32:09.090108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.691 [2024-07-15 12:32:09.147805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.950  Copying: 512/512 [B] (average 500 kBps) 00:07:36.950 00:07:36.950 12:32:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ awdhg0nuwk5bj5995uk55gfyaqyvxjg9trgscod5jkk71tpawkgcszlvfbahcc6rby69voahjbw7uhi1w431ogfirv53n5qpi52wh69huvjoqxudf9eevejp21aixganm4427hia099t6gtx3v8qprc94ey3p3d4hpog5j37jz67ckjita10u78nanx8q3g6su9kf1z5wim1szvfhwc5er6pamyonu17ixxl7d3s2721iqbmvzx3t1kn6wkmnk9mffby3t9j119brjsdwyn89dimgnr0bb2wdej1zdwr447p3vdj1mtdn1fbcvqyi4sh8zklclfucem9d0im9hd0nkcupty53hdkuzuy7806uhqqlbdrfcjffrliej98jkcma8fgr7vydzrbcfurfbw1owzdqm9qhwpiospf0ajxn6pikgjl62qql2jemi5pnh4zs1oj8d5cduzwxghy3msq9lq421iwqyfbibxxjlcs5hu71p1ki9otuyygwnwy6j3w == \a\w\d\h\g\0\n\u\w\k\5\b\j\5\9\9\5\u\k\5\5\g\f\y\a\q\y\v\x\j\g\9\t\r\g\s\c\o\d\5\j\k\k\7\1\t\p\a\w\k\g\c\s\z\l\v\f\b\a\h\c\c\6\r\b\y\6\9\v\o\a\h\j\b\w\7\u\h\i\1\w\4\3\1\o\g\f\i\r\v\5\3\n\5\q\p\i\5\2\w\h\6\9\h\u\v\j\o\q\x\u\d\f\9\e\e\v\e\j\p\2\1\a\i\x\g\a\n\m\4\4\2\7\h\i\a\0\9\9\t\6\g\t\x\3\v\8\q\p\r\c\9\4\e\y\3\p\3\d\4\h\p\o\g\5\j\3\7\j\z\6\7\c\k\j\i\t\a\1\0\u\7\8\n\a\n\x\8\q\3\g\6\s\u\9\k\f\1\z\5\w\i\m\1\s\z\v\f\h\w\c\5\e\r\6\p\a\m\y\o\n\u\1\7\i\x\x\l\7\d\3\s\2\7\2\1\i\q\b\m\v\z\x\3\t\1\k\n\6\w\k\m\n\k\9\m\f\f\b\y\3\t\9\j\1\1\9\b\r\j\s\d\w\y\n\8\9\d\i\m\g\n\r\0\b\b\2\w\d\e\j\1\z\d\w\r\4\4\7\p\3\v\d\j\1\m\t\d\n\1\f\b\c\v\q\y\i\4\s\h\8\z\k\l\c\l\f\u\c\e\m\9\d\0\i\m\9\h\d\0\n\k\c\u\p\t\y\5\3\h\d\k\u\z\u\y\7\8\0\6\u\h\q\q\l\b\d\r\f\c\j\f\f\r\l\i\e\j\9\8\j\k\c\m\a\8\f\g\r\7\v\y\d\z\r\b\c\f\u\r\f\b\w\1\o\w\z\d\q\m\9\q\h\w\p\i\o\s\p\f\0\a\j\x\n\6\p\i\k\g\j\l\6\2\q\q\l\2\j\e\m\i\5\p\n\h\4\z\s\1\o\j\8\d\5\c\d\u\z\w\x\g\h\y\3\m\s\q\9\l\q\4\2\1\i\w\q\y\f\b\i\b\x\x\j\l\c\s\5\h\u\7\1\p\1\k\i\9\o\t\u\y\y\g\w\n\w\y\6\j\3\w ]] 00:07:36.950 12:32:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.950 12:32:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:36.950 [2024-07-15 12:32:09.471087] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:36.950 [2024-07-15 12:32:09.471181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63574 ] 00:07:36.950 [2024-07-15 12:32:09.609607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.208 [2024-07-15 12:32:09.729468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.208 [2024-07-15 12:32:09.791649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.468  Copying: 512/512 [B] (average 500 kBps) 00:07:37.468 00:07:37.468 12:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ awdhg0nuwk5bj5995uk55gfyaqyvxjg9trgscod5jkk71tpawkgcszlvfbahcc6rby69voahjbw7uhi1w431ogfirv53n5qpi52wh69huvjoqxudf9eevejp21aixganm4427hia099t6gtx3v8qprc94ey3p3d4hpog5j37jz67ckjita10u78nanx8q3g6su9kf1z5wim1szvfhwc5er6pamyonu17ixxl7d3s2721iqbmvzx3t1kn6wkmnk9mffby3t9j119brjsdwyn89dimgnr0bb2wdej1zdwr447p3vdj1mtdn1fbcvqyi4sh8zklclfucem9d0im9hd0nkcupty53hdkuzuy7806uhqqlbdrfcjffrliej98jkcma8fgr7vydzrbcfurfbw1owzdqm9qhwpiospf0ajxn6pikgjl62qql2jemi5pnh4zs1oj8d5cduzwxghy3msq9lq421iwqyfbibxxjlcs5hu71p1ki9otuyygwnwy6j3w == \a\w\d\h\g\0\n\u\w\k\5\b\j\5\9\9\5\u\k\5\5\g\f\y\a\q\y\v\x\j\g\9\t\r\g\s\c\o\d\5\j\k\k\7\1\t\p\a\w\k\g\c\s\z\l\v\f\b\a\h\c\c\6\r\b\y\6\9\v\o\a\h\j\b\w\7\u\h\i\1\w\4\3\1\o\g\f\i\r\v\5\3\n\5\q\p\i\5\2\w\h\6\9\h\u\v\j\o\q\x\u\d\f\9\e\e\v\e\j\p\2\1\a\i\x\g\a\n\m\4\4\2\7\h\i\a\0\9\9\t\6\g\t\x\3\v\8\q\p\r\c\9\4\e\y\3\p\3\d\4\h\p\o\g\5\j\3\7\j\z\6\7\c\k\j\i\t\a\1\0\u\7\8\n\a\n\x\8\q\3\g\6\s\u\9\k\f\1\z\5\w\i\m\1\s\z\v\f\h\w\c\5\e\r\6\p\a\m\y\o\n\u\1\7\i\x\x\l\7\d\3\s\2\7\2\1\i\q\b\m\v\z\x\3\t\1\k\n\6\w\k\m\n\k\9\m\f\f\b\y\3\t\9\j\1\1\9\b\r\j\s\d\w\y\n\8\9\d\i\m\g\n\r\0\b\b\2\w\d\e\j\1\z\d\w\r\4\4\7\p\3\v\d\j\1\m\t\d\n\1\f\b\c\v\q\y\i\4\s\h\8\z\k\l\c\l\f\u\c\e\m\9\d\0\i\m\9\h\d\0\n\k\c\u\p\t\y\5\3\h\d\k\u\z\u\y\7\8\0\6\u\h\q\q\l\b\d\r\f\c\j\f\f\r\l\i\e\j\9\8\j\k\c\m\a\8\f\g\r\7\v\y\d\z\r\b\c\f\u\r\f\b\w\1\o\w\z\d\q\m\9\q\h\w\p\i\o\s\p\f\0\a\j\x\n\6\p\i\k\g\j\l\6\2\q\q\l\2\j\e\m\i\5\p\n\h\4\z\s\1\o\j\8\d\5\c\d\u\z\w\x\g\h\y\3\m\s\q\9\l\q\4\2\1\i\w\q\y\f\b\i\b\x\x\j\l\c\s\5\h\u\7\1\p\1\k\i\9\o\t\u\y\y\g\w\n\w\y\6\j\3\w ]] 00:07:37.468 12:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.468 12:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:37.468 [2024-07-15 12:32:10.105374] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:37.468 [2024-07-15 12:32:10.105465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63576 ] 00:07:37.726 [2024-07-15 12:32:10.247520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.726 [2024-07-15 12:32:10.354341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.984 [2024-07-15 12:32:10.412065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.243  Copying: 512/512 [B] (average 125 kBps) 00:07:38.243 00:07:38.243 12:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ awdhg0nuwk5bj5995uk55gfyaqyvxjg9trgscod5jkk71tpawkgcszlvfbahcc6rby69voahjbw7uhi1w431ogfirv53n5qpi52wh69huvjoqxudf9eevejp21aixganm4427hia099t6gtx3v8qprc94ey3p3d4hpog5j37jz67ckjita10u78nanx8q3g6su9kf1z5wim1szvfhwc5er6pamyonu17ixxl7d3s2721iqbmvzx3t1kn6wkmnk9mffby3t9j119brjsdwyn89dimgnr0bb2wdej1zdwr447p3vdj1mtdn1fbcvqyi4sh8zklclfucem9d0im9hd0nkcupty53hdkuzuy7806uhqqlbdrfcjffrliej98jkcma8fgr7vydzrbcfurfbw1owzdqm9qhwpiospf0ajxn6pikgjl62qql2jemi5pnh4zs1oj8d5cduzwxghy3msq9lq421iwqyfbibxxjlcs5hu71p1ki9otuyygwnwy6j3w == \a\w\d\h\g\0\n\u\w\k\5\b\j\5\9\9\5\u\k\5\5\g\f\y\a\q\y\v\x\j\g\9\t\r\g\s\c\o\d\5\j\k\k\7\1\t\p\a\w\k\g\c\s\z\l\v\f\b\a\h\c\c\6\r\b\y\6\9\v\o\a\h\j\b\w\7\u\h\i\1\w\4\3\1\o\g\f\i\r\v\5\3\n\5\q\p\i\5\2\w\h\6\9\h\u\v\j\o\q\x\u\d\f\9\e\e\v\e\j\p\2\1\a\i\x\g\a\n\m\4\4\2\7\h\i\a\0\9\9\t\6\g\t\x\3\v\8\q\p\r\c\9\4\e\y\3\p\3\d\4\h\p\o\g\5\j\3\7\j\z\6\7\c\k\j\i\t\a\1\0\u\7\8\n\a\n\x\8\q\3\g\6\s\u\9\k\f\1\z\5\w\i\m\1\s\z\v\f\h\w\c\5\e\r\6\p\a\m\y\o\n\u\1\7\i\x\x\l\7\d\3\s\2\7\2\1\i\q\b\m\v\z\x\3\t\1\k\n\6\w\k\m\n\k\9\m\f\f\b\y\3\t\9\j\1\1\9\b\r\j\s\d\w\y\n\8\9\d\i\m\g\n\r\0\b\b\2\w\d\e\j\1\z\d\w\r\4\4\7\p\3\v\d\j\1\m\t\d\n\1\f\b\c\v\q\y\i\4\s\h\8\z\k\l\c\l\f\u\c\e\m\9\d\0\i\m\9\h\d\0\n\k\c\u\p\t\y\5\3\h\d\k\u\z\u\y\7\8\0\6\u\h\q\q\l\b\d\r\f\c\j\f\f\r\l\i\e\j\9\8\j\k\c\m\a\8\f\g\r\7\v\y\d\z\r\b\c\f\u\r\f\b\w\1\o\w\z\d\q\m\9\q\h\w\p\i\o\s\p\f\0\a\j\x\n\6\p\i\k\g\j\l\6\2\q\q\l\2\j\e\m\i\5\p\n\h\4\z\s\1\o\j\8\d\5\c\d\u\z\w\x\g\h\y\3\m\s\q\9\l\q\4\2\1\i\w\q\y\f\b\i\b\x\x\j\l\c\s\5\h\u\7\1\p\1\k\i\9\o\t\u\y\y\g\w\n\w\y\6\j\3\w ]] 00:07:38.243 12:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:38.243 12:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:38.243 [2024-07-15 12:32:10.737044] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
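Every direct/nonblock/sync/dsync pairing in this stretch is driven by the two small arrays declared at the top of the test (flags_ro=(direct nonblock) and flags_rw=("${flags_ro[@]}" sync dsync)) and a nested loop that repeats the same 512-byte copy once per --iflag/--oflag combination. A condensed sketch of that driver loop, with the content check elided:

# sketch only: the shape of the dd_flags_misc permutation loop
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        ./build/bin/spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                            --of=dd.dump1 --oflag="$flag_rw"
        # dd.dump1 is then compared byte-for-byte against the generated payload
    done
done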
00:07:38.243 [2024-07-15 12:32:10.737168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63589 ] 00:07:38.243 [2024-07-15 12:32:10.875750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.502 [2024-07-15 12:32:10.981251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.502 [2024-07-15 12:32:11.034540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.762  Copying: 512/512 [B] (average 500 kBps) 00:07:38.762 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ awdhg0nuwk5bj5995uk55gfyaqyvxjg9trgscod5jkk71tpawkgcszlvfbahcc6rby69voahjbw7uhi1w431ogfirv53n5qpi52wh69huvjoqxudf9eevejp21aixganm4427hia099t6gtx3v8qprc94ey3p3d4hpog5j37jz67ckjita10u78nanx8q3g6su9kf1z5wim1szvfhwc5er6pamyonu17ixxl7d3s2721iqbmvzx3t1kn6wkmnk9mffby3t9j119brjsdwyn89dimgnr0bb2wdej1zdwr447p3vdj1mtdn1fbcvqyi4sh8zklclfucem9d0im9hd0nkcupty53hdkuzuy7806uhqqlbdrfcjffrliej98jkcma8fgr7vydzrbcfurfbw1owzdqm9qhwpiospf0ajxn6pikgjl62qql2jemi5pnh4zs1oj8d5cduzwxghy3msq9lq421iwqyfbibxxjlcs5hu71p1ki9otuyygwnwy6j3w == \a\w\d\h\g\0\n\u\w\k\5\b\j\5\9\9\5\u\k\5\5\g\f\y\a\q\y\v\x\j\g\9\t\r\g\s\c\o\d\5\j\k\k\7\1\t\p\a\w\k\g\c\s\z\l\v\f\b\a\h\c\c\6\r\b\y\6\9\v\o\a\h\j\b\w\7\u\h\i\1\w\4\3\1\o\g\f\i\r\v\5\3\n\5\q\p\i\5\2\w\h\6\9\h\u\v\j\o\q\x\u\d\f\9\e\e\v\e\j\p\2\1\a\i\x\g\a\n\m\4\4\2\7\h\i\a\0\9\9\t\6\g\t\x\3\v\8\q\p\r\c\9\4\e\y\3\p\3\d\4\h\p\o\g\5\j\3\7\j\z\6\7\c\k\j\i\t\a\1\0\u\7\8\n\a\n\x\8\q\3\g\6\s\u\9\k\f\1\z\5\w\i\m\1\s\z\v\f\h\w\c\5\e\r\6\p\a\m\y\o\n\u\1\7\i\x\x\l\7\d\3\s\2\7\2\1\i\q\b\m\v\z\x\3\t\1\k\n\6\w\k\m\n\k\9\m\f\f\b\y\3\t\9\j\1\1\9\b\r\j\s\d\w\y\n\8\9\d\i\m\g\n\r\0\b\b\2\w\d\e\j\1\z\d\w\r\4\4\7\p\3\v\d\j\1\m\t\d\n\1\f\b\c\v\q\y\i\4\s\h\8\z\k\l\c\l\f\u\c\e\m\9\d\0\i\m\9\h\d\0\n\k\c\u\p\t\y\5\3\h\d\k\u\z\u\y\7\8\0\6\u\h\q\q\l\b\d\r\f\c\j\f\f\r\l\i\e\j\9\8\j\k\c\m\a\8\f\g\r\7\v\y\d\z\r\b\c\f\u\r\f\b\w\1\o\w\z\d\q\m\9\q\h\w\p\i\o\s\p\f\0\a\j\x\n\6\p\i\k\g\j\l\6\2\q\q\l\2\j\e\m\i\5\p\n\h\4\z\s\1\o\j\8\d\5\c\d\u\z\w\x\g\h\y\3\m\s\q\9\l\q\4\2\1\i\w\q\y\f\b\i\b\x\x\j\l\c\s\5\h\u\7\1\p\1\k\i\9\o\t\u\y\y\g\w\n\w\y\6\j\3\w ]] 00:07:38.762 00:07:38.762 real 0m5.052s 00:07:38.762 user 0m2.839s 00:07:38.762 sys 0m1.225s 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:38.762 ************************************ 00:07:38.762 END TEST dd_flags_misc_forced_aio 00:07:38.762 ************************************ 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:38.762 00:07:38.762 real 0m22.568s 00:07:38.762 user 0m11.509s 00:07:38.762 sys 0m7.002s 00:07:38.762 12:32:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.762 12:32:11 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:38.762 ************************************ 00:07:38.762 END TEST spdk_dd_posix 00:07:38.762 ************************************ 00:07:38.762 12:32:11 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:38.762 12:32:11 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:38.762 12:32:11 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.762 12:32:11 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.762 12:32:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:38.762 ************************************ 00:07:38.762 START TEST spdk_dd_malloc 00:07:38.762 ************************************ 00:07:38.762 12:32:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:39.021 * Looking for test storage... 00:07:39.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:39.021 ************************************ 00:07:39.021 START TEST dd_malloc_copy 00:07:39.021 ************************************ 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:39.021 12:32:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.021 [2024-07-15 12:32:11.533678] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:39.022 [2024-07-15 12:32:11.533786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63663 ] 00:07:39.022 { 00:07:39.022 "subsystems": [ 00:07:39.022 { 00:07:39.022 "subsystem": "bdev", 00:07:39.022 "config": [ 00:07:39.022 { 00:07:39.022 "params": { 00:07:39.022 "block_size": 512, 00:07:39.022 "num_blocks": 1048576, 00:07:39.022 "name": "malloc0" 00:07:39.022 }, 00:07:39.022 "method": "bdev_malloc_create" 00:07:39.022 }, 00:07:39.022 { 00:07:39.022 "params": { 00:07:39.022 "block_size": 512, 00:07:39.022 "num_blocks": 1048576, 00:07:39.022 "name": "malloc1" 00:07:39.022 }, 00:07:39.022 "method": "bdev_malloc_create" 00:07:39.022 }, 00:07:39.022 { 00:07:39.022 "method": "bdev_wait_for_examine" 00:07:39.022 } 00:07:39.022 ] 00:07:39.022 } 00:07:39.022 ] 00:07:39.022 } 00:07:39.022 [2024-07-15 12:32:11.672449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.281 [2024-07-15 12:32:11.766502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.281 [2024-07-15 12:32:11.820152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.724  Copying: 205/512 [MB] (205 MBps) Copying: 393/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:07:42.724 00:07:42.724 12:32:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:42.724 12:32:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:42.724 12:32:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:42.724 12:32:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.982 [2024-07-15 12:32:15.445341] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
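The malloc_copy run above shows the whole recipe for driving spdk_dd against block devices instead of files: the targets are named with --ib/--ob, and the two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each) are created on the fly from a JSON bdev config streamed in via --json (the harness passes it as /dev/fd/62). A trimmed, file-based sketch of the same invocation, with /tmp/malloc.json standing in for that pipe:

# sketch only: copy between two in-memory malloc bdevs with spdk_dd
cat > /tmp/malloc.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc.json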
00:07:42.982 [2024-07-15 12:32:15.445499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63711 ] 00:07:42.982 { 00:07:42.982 "subsystems": [ 00:07:42.982 { 00:07:42.982 "subsystem": "bdev", 00:07:42.982 "config": [ 00:07:42.982 { 00:07:42.982 "params": { 00:07:42.982 "block_size": 512, 00:07:42.982 "num_blocks": 1048576, 00:07:42.982 "name": "malloc0" 00:07:42.982 }, 00:07:42.982 "method": "bdev_malloc_create" 00:07:42.982 }, 00:07:42.982 { 00:07:42.982 "params": { 00:07:42.982 "block_size": 512, 00:07:42.982 "num_blocks": 1048576, 00:07:42.982 "name": "malloc1" 00:07:42.982 }, 00:07:42.982 "method": "bdev_malloc_create" 00:07:42.982 }, 00:07:42.982 { 00:07:42.982 "method": "bdev_wait_for_examine" 00:07:42.982 } 00:07:42.982 ] 00:07:42.982 } 00:07:42.982 ] 00:07:42.982 } 00:07:42.982 [2024-07-15 12:32:15.589330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.239 [2024-07-15 12:32:15.697399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.239 [2024-07-15 12:32:15.756130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.688  Copying: 199/512 [MB] (199 MBps) Copying: 397/512 [MB] (197 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:07:46.688 00:07:46.688 00:07:46.688 real 0m7.789s 00:07:46.688 user 0m6.760s 00:07:46.688 sys 0m0.871s 00:07:46.688 12:32:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.688 ************************************ 00:07:46.688 END TEST dd_malloc_copy 00:07:46.688 12:32:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.688 ************************************ 00:07:46.688 12:32:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:46.688 00:07:46.688 real 0m7.913s 00:07:46.688 user 0m6.811s 00:07:46.688 sys 0m0.946s 00:07:46.688 12:32:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.688 ************************************ 00:07:46.688 END TEST spdk_dd_malloc 00:07:46.688 ************************************ 00:07:46.688 12:32:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:46.688 12:32:19 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:46.688 12:32:19 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:46.688 12:32:19 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:46.688 12:32:19 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.688 12:32:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.688 ************************************ 00:07:46.688 START TEST spdk_dd_bdev_to_bdev 00:07:46.688 ************************************ 00:07:46.688 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:46.947 * Looking for test storage... 
00:07:46.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:46.947 
12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:46.947 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:46.948 ************************************ 00:07:46.948 START TEST dd_inflate_file 00:07:46.948 ************************************ 00:07:46.948 12:32:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:46.948 [2024-07-15 12:32:19.503586] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:46.948 [2024-07-15 12:32:19.503694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63821 ] 00:07:47.207 [2024-07-15 12:32:19.640218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.207 [2024-07-15 12:32:19.756597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.207 [2024-07-15 12:32:19.809071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.466  Copying: 64/64 [MB] (average 1454 MBps) 00:07:47.466 00:07:47.466 00:07:47.466 real 0m0.657s 00:07:47.466 user 0m0.419s 00:07:47.466 sys 0m0.293s 00:07:47.466 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.466 ************************************ 00:07:47.466 END TEST dd_inflate_file 00:07:47.466 ************************************ 00:07:47.466 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:47.725 ************************************ 00:07:47.725 START TEST dd_copy_to_out_bdev 00:07:47.725 ************************************ 00:07:47.725 12:32:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:47.725 { 00:07:47.725 "subsystems": [ 00:07:47.725 { 00:07:47.725 "subsystem": "bdev", 00:07:47.725 "config": [ 00:07:47.725 { 00:07:47.725 "params": { 00:07:47.726 "trtype": "pcie", 00:07:47.726 "traddr": "0000:00:10.0", 00:07:47.726 "name": "Nvme0" 00:07:47.726 }, 00:07:47.726 "method": "bdev_nvme_attach_controller" 00:07:47.726 }, 00:07:47.726 { 00:07:47.726 "params": { 00:07:47.726 "trtype": "pcie", 00:07:47.726 "traddr": "0000:00:11.0", 00:07:47.726 "name": "Nvme1" 00:07:47.726 }, 00:07:47.726 "method": "bdev_nvme_attach_controller" 00:07:47.726 }, 00:07:47.726 { 00:07:47.726 "method": "bdev_wait_for_examine" 00:07:47.726 } 00:07:47.726 ] 00:07:47.726 } 00:07:47.726 ] 00:07:47.726 } 00:07:47.726 [2024-07-15 12:32:20.213354] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:47.726 [2024-07-15 12:32:20.213451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63854 ] 00:07:47.726 [2024-07-15 12:32:20.350277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.985 [2024-07-15 12:32:20.465666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.985 [2024-07-15 12:32:20.518456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.380  Copying: 64/64 [MB] (average 67 MBps) 00:07:49.380 00:07:49.380 ************************************ 00:07:49.380 END TEST dd_copy_to_out_bdev 00:07:49.380 ************************************ 00:07:49.380 00:07:49.380 real 0m1.766s 00:07:49.380 user 0m1.531s 00:07:49.380 sys 0m1.314s 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.380 ************************************ 00:07:49.380 START TEST dd_offset_magic 00:07:49.380 ************************************ 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:49.380 12:32:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:49.380 [2024-07-15 12:32:22.030914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:49.380 [2024-07-15 12:32:22.031022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63894 ] 00:07:49.380 { 00:07:49.380 "subsystems": [ 00:07:49.380 { 00:07:49.380 "subsystem": "bdev", 00:07:49.380 "config": [ 00:07:49.380 { 00:07:49.380 "params": { 00:07:49.380 "trtype": "pcie", 00:07:49.380 "traddr": "0000:00:10.0", 00:07:49.380 "name": "Nvme0" 00:07:49.380 }, 00:07:49.380 "method": "bdev_nvme_attach_controller" 00:07:49.380 }, 00:07:49.380 { 00:07:49.380 "params": { 00:07:49.380 "trtype": "pcie", 00:07:49.380 "traddr": "0000:00:11.0", 00:07:49.380 "name": "Nvme1" 00:07:49.380 }, 00:07:49.380 "method": "bdev_nvme_attach_controller" 00:07:49.380 }, 00:07:49.380 { 00:07:49.380 "method": "bdev_wait_for_examine" 00:07:49.380 } 00:07:49.380 ] 00:07:49.380 } 00:07:49.380 ] 00:07:49.380 } 00:07:49.639 [2024-07-15 12:32:22.162114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.639 [2024-07-15 12:32:22.276965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.898 [2024-07-15 12:32:22.329939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.157  Copying: 65/65 [MB] (average 970 MBps) 00:07:50.157 00:07:50.157 12:32:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:50.157 12:32:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:50.157 12:32:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:50.157 12:32:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:50.416 [2024-07-15 12:32:22.872616] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:50.416 [2024-07-15 12:32:22.872719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63914 ] 00:07:50.416 { 00:07:50.416 "subsystems": [ 00:07:50.416 { 00:07:50.416 "subsystem": "bdev", 00:07:50.416 "config": [ 00:07:50.416 { 00:07:50.416 "params": { 00:07:50.416 "trtype": "pcie", 00:07:50.416 "traddr": "0000:00:10.0", 00:07:50.416 "name": "Nvme0" 00:07:50.416 }, 00:07:50.416 "method": "bdev_nvme_attach_controller" 00:07:50.416 }, 00:07:50.416 { 00:07:50.416 "params": { 00:07:50.416 "trtype": "pcie", 00:07:50.416 "traddr": "0000:00:11.0", 00:07:50.416 "name": "Nvme1" 00:07:50.416 }, 00:07:50.416 "method": "bdev_nvme_attach_controller" 00:07:50.416 }, 00:07:50.416 { 00:07:50.416 "method": "bdev_wait_for_examine" 00:07:50.416 } 00:07:50.416 ] 00:07:50.416 } 00:07:50.416 ] 00:07:50.416 } 00:07:50.416 [2024-07-15 12:32:23.003399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.693 [2024-07-15 12:32:23.120398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.693 [2024-07-15 12:32:23.173771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.958  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:50.958 00:07:50.958 12:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:50.958 12:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:50.958 12:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:50.958 12:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:50.958 12:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:50.958 12:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:50.958 12:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:50.958 { 00:07:50.958 "subsystems": [ 00:07:50.958 { 00:07:50.958 "subsystem": "bdev", 00:07:50.958 "config": [ 00:07:50.958 { 00:07:50.958 "params": { 00:07:50.958 "trtype": "pcie", 00:07:50.958 "traddr": "0000:00:10.0", 00:07:50.958 "name": "Nvme0" 00:07:50.958 }, 00:07:50.958 "method": "bdev_nvme_attach_controller" 00:07:50.958 }, 00:07:50.958 { 00:07:50.958 "params": { 00:07:50.958 "trtype": "pcie", 00:07:50.958 "traddr": "0000:00:11.0", 00:07:50.958 "name": "Nvme1" 00:07:50.958 }, 00:07:50.958 "method": "bdev_nvme_attach_controller" 00:07:50.958 }, 00:07:50.958 { 00:07:50.958 "method": "bdev_wait_for_examine" 00:07:50.958 } 00:07:50.958 ] 00:07:50.958 } 00:07:50.958 ] 00:07:50.958 } 00:07:50.958 [2024-07-15 12:32:23.627931] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:50.958 [2024-07-15 12:32:23.628054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63936 ] 00:07:51.217 [2024-07-15 12:32:23.768240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.217 [2024-07-15 12:32:23.883756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.475 [2024-07-15 12:32:23.936787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.993  Copying: 65/65 [MB] (average 1083 MBps) 00:07:51.993 00:07:51.993 12:32:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:51.993 12:32:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:51.993 12:32:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:51.993 12:32:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:51.993 { 00:07:51.993 "subsystems": [ 00:07:51.993 { 00:07:51.993 "subsystem": "bdev", 00:07:51.993 "config": [ 00:07:51.993 { 00:07:51.993 "params": { 00:07:51.993 "trtype": "pcie", 00:07:51.993 "traddr": "0000:00:10.0", 00:07:51.993 "name": "Nvme0" 00:07:51.993 }, 00:07:51.993 "method": "bdev_nvme_attach_controller" 00:07:51.993 }, 00:07:51.993 { 00:07:51.993 "params": { 00:07:51.993 "trtype": "pcie", 00:07:51.993 "traddr": "0000:00:11.0", 00:07:51.993 "name": "Nvme1" 00:07:51.993 }, 00:07:51.993 "method": "bdev_nvme_attach_controller" 00:07:51.993 }, 00:07:51.993 { 00:07:51.993 "method": "bdev_wait_for_examine" 00:07:51.993 } 00:07:51.993 ] 00:07:51.993 } 00:07:51.993 ] 00:07:51.993 } 00:07:51.993 [2024-07-15 12:32:24.495760] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:51.993 [2024-07-15 12:32:24.495886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63956 ] 00:07:51.993 [2024-07-15 12:32:24.635696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.252 [2024-07-15 12:32:24.750699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.252 [2024-07-15 12:32:24.803582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.770  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:52.770 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:52.770 00:07:52.770 real 0m3.227s 00:07:52.770 user 0m2.350s 00:07:52.770 sys 0m0.911s 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.770 ************************************ 00:07:52.770 END TEST dd_offset_magic 00:07:52.770 ************************************ 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:52.770 12:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:52.770 [2024-07-15 12:32:25.295257] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:52.770 [2024-07-15 12:32:25.295408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:07:52.770 { 00:07:52.770 "subsystems": [ 00:07:52.770 { 00:07:52.770 "subsystem": "bdev", 00:07:52.770 "config": [ 00:07:52.770 { 00:07:52.770 "params": { 00:07:52.770 "trtype": "pcie", 00:07:52.770 "traddr": "0000:00:10.0", 00:07:52.770 "name": "Nvme0" 00:07:52.770 }, 00:07:52.770 "method": "bdev_nvme_attach_controller" 00:07:52.770 }, 00:07:52.770 { 00:07:52.770 "params": { 00:07:52.770 "trtype": "pcie", 00:07:52.770 "traddr": "0000:00:11.0", 00:07:52.770 "name": "Nvme1" 00:07:52.770 }, 00:07:52.770 "method": "bdev_nvme_attach_controller" 00:07:52.770 }, 00:07:52.770 { 00:07:52.770 "method": "bdev_wait_for_examine" 00:07:52.770 } 00:07:52.770 ] 00:07:52.770 } 00:07:52.770 ] 00:07:52.770 } 00:07:52.770 [2024-07-15 12:32:25.434149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.028 [2024-07-15 12:32:25.563405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.028 [2024-07-15 12:32:25.620298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.545  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:53.545 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:53.545 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:53.545 [2024-07-15 12:32:26.063713] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:53.545 [2024-07-15 12:32:26.063812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64006 ] 00:07:53.545 { 00:07:53.545 "subsystems": [ 00:07:53.545 { 00:07:53.545 "subsystem": "bdev", 00:07:53.545 "config": [ 00:07:53.545 { 00:07:53.545 "params": { 00:07:53.545 "trtype": "pcie", 00:07:53.545 "traddr": "0000:00:10.0", 00:07:53.545 "name": "Nvme0" 00:07:53.545 }, 00:07:53.545 "method": "bdev_nvme_attach_controller" 00:07:53.545 }, 00:07:53.545 { 00:07:53.545 "params": { 00:07:53.545 "trtype": "pcie", 00:07:53.545 "traddr": "0000:00:11.0", 00:07:53.545 "name": "Nvme1" 00:07:53.545 }, 00:07:53.545 "method": "bdev_nvme_attach_controller" 00:07:53.545 }, 00:07:53.545 { 00:07:53.545 "method": "bdev_wait_for_examine" 00:07:53.545 } 00:07:53.545 ] 00:07:53.545 } 00:07:53.545 ] 00:07:53.545 } 00:07:53.545 [2024-07-15 12:32:26.197354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.803 [2024-07-15 12:32:26.314115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.803 [2024-07-15 12:32:26.367050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.319  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:54.320 00:07:54.320 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:54.320 00:07:54.320 real 0m7.439s 00:07:54.320 user 0m5.467s 00:07:54.320 sys 0m3.205s 00:07:54.320 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.320 ************************************ 00:07:54.320 END TEST spdk_dd_bdev_to_bdev 00:07:54.320 ************************************ 00:07:54.320 12:32:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:54.320 12:32:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:54.320 12:32:26 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:54.320 12:32:26 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:54.320 12:32:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.320 12:32:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.320 12:32:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.320 ************************************ 00:07:54.320 START TEST spdk_dd_uring 00:07:54.320 ************************************ 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:54.320 * Looking for test storage... 
00:07:54.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:54.320 ************************************ 00:07:54.320 START TEST dd_uring_copy 00:07:54.320 ************************************ 00:07:54.320 
12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=3g4c8o2unqwsnex0hlse58qqtew600qi2748vgydbl7unfqp3hwni59ubvukfu4kyj4bvwg7l99vvu2mceeyox8x8pkn7jpn7rh654ezi1niri42v5ofyyqj35qtzyannxjrj6t9aodb84xykvm2j4ppljse5hsz73kdkz0ohfnf9kf5rr04fbliamljzk0uv2x1fbek88pcwxnm8f3octlnd229okxtrc5jjqkicqi4j0m2hdsf2p3web74he158t7rrvhdlfpr6w6ydk7y1uy465nrsdjeqh2m4yvs03du8mw4qqpg9w7mtmkkthfc12m9wqv7gz2mhj7ualhq3k9h9tp9cg236iamx2k8991n59ufwvfag7nb8fju6m5pjaoje2rnxyrhaevhusawm0wtmz89pay9cg9bnwx9ens4fjsj7tzf0fhy2c451xt8wmoq635y75rhg67ydiaoky694617tesgmcppflu0q7ddeao8151axhhb23bgzxo09zvyic02q1b2r1om8pj4sumuvubugrtj93rp5ksq1kbyizazo6lk28uk5a4ny72eridb6del8pat0ur29q5km5ns232qv6pozx3sdel0jdtxo170mb4dwim0v3eopz02qhu5j9z7s279nltn90635kffvat9jn62jtsifd59qy8s9no9u3tecdzk2e0oflrq4cysmtgeguztaltbuqfiry9ey90xocipys2kmd3cft2pqs94hxpx5uf6pknh545u6x4r6q82m6786709qxk3kjln4zl0pwlxlugqhpjis019g5r3kv7est3zvubzbyt6b90cgfvkorwbnd5mnzk0kdaqllhvvg1xv4ikh0kfv4hamyfsxctfovr848jyclbir19ows3by44a7mrn0ynz8ju4u8x5wi58e9z7zx8bhxj2fsclkc9njfhcs6sq867uymgcaug6n17ns7sh3js1jvbs2bhxrc7u8dpm9p68bfvflhcq5hbevlmzs5l6mgzm 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 3g4c8o2unqwsnex0hlse58qqtew600qi2748vgydbl7unfqp3hwni59ubvukfu4kyj4bvwg7l99vvu2mceeyox8x8pkn7jpn7rh654ezi1niri42v5ofyyqj35qtzyannxjrj6t9aodb84xykvm2j4ppljse5hsz73kdkz0ohfnf9kf5rr04fbliamljzk0uv2x1fbek88pcwxnm8f3octlnd229okxtrc5jjqkicqi4j0m2hdsf2p3web74he158t7rrvhdlfpr6w6ydk7y1uy465nrsdjeqh2m4yvs03du8mw4qqpg9w7mtmkkthfc12m9wqv7gz2mhj7ualhq3k9h9tp9cg236iamx2k8991n59ufwvfag7nb8fju6m5pjaoje2rnxyrhaevhusawm0wtmz89pay9cg9bnwx9ens4fjsj7tzf0fhy2c451xt8wmoq635y75rhg67ydiaoky694617tesgmcppflu0q7ddeao8151axhhb23bgzxo09zvyic02q1b2r1om8pj4sumuvubugrtj93rp5ksq1kbyizazo6lk28uk5a4ny72eridb6del8pat0ur29q5km5ns232qv6pozx3sdel0jdtxo170mb4dwim0v3eopz02qhu5j9z7s279nltn90635kffvat9jn62jtsifd59qy8s9no9u3tecdzk2e0oflrq4cysmtgeguztaltbuqfiry9ey90xocipys2kmd3cft2pqs94hxpx5uf6pknh545u6x4r6q82m6786709qxk3kjln4zl0pwlxlugqhpjis019g5r3kv7est3zvubzbyt6b90cgfvkorwbnd5mnzk0kdaqllhvvg1xv4ikh0kfv4hamyfsxctfovr848jyclbir19ows3by44a7mrn0ynz8ju4u8x5wi58e9z7zx8bhxj2fsclkc9njfhcs6sq867uymgcaug6n17ns7sh3js1jvbs2bhxrc7u8dpm9p68bfvflhcq5hbevlmzs5l6mgzm 00:07:54.320 12:32:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:54.578 [2024-07-15 12:32:27.019206] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:54.578 [2024-07-15 12:32:27.019308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64076 ] 00:07:54.578 [2024-07-15 12:32:27.160111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.836 [2024-07-15 12:32:27.289069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.836 [2024-07-15 12:32:27.346432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.036  Copying: 511/511 [MB] (average 1199 MBps) 00:07:56.036 00:07:56.036 12:32:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:56.036 12:32:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:56.036 12:32:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:56.036 12:32:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.036 { 00:07:56.036 "subsystems": [ 00:07:56.036 { 00:07:56.036 "subsystem": "bdev", 00:07:56.036 "config": [ 00:07:56.036 { 00:07:56.036 "params": { 00:07:56.036 "block_size": 512, 00:07:56.036 "num_blocks": 1048576, 00:07:56.036 "name": "malloc0" 00:07:56.036 }, 00:07:56.036 "method": "bdev_malloc_create" 00:07:56.036 }, 00:07:56.036 { 00:07:56.036 "params": { 00:07:56.036 "filename": "/dev/zram1", 00:07:56.036 "name": "uring0" 00:07:56.036 }, 00:07:56.036 "method": "bdev_uring_create" 00:07:56.036 }, 00:07:56.036 { 00:07:56.036 "method": "bdev_wait_for_examine" 00:07:56.036 } 00:07:56.036 ] 00:07:56.036 } 00:07:56.036 ] 00:07:56.036 } 00:07:56.036 [2024-07-15 12:32:28.454763] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:56.036 [2024-07-15 12:32:28.454866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64092 ] 00:07:56.036 [2024-07-15 12:32:28.593549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.296 [2024-07-15 12:32:28.710647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.296 [2024-07-15 12:32:28.765896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.438  Copying: 208/512 [MB] (208 MBps) Copying: 418/512 [MB] (209 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:07:59.438 00:07:59.438 12:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:59.438 12:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:59.438 12:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:59.438 12:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:59.438 [2024-07-15 12:32:31.876822] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:59.438 [2024-07-15 12:32:31.876927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64147 ] 00:07:59.438 { 00:07:59.438 "subsystems": [ 00:07:59.438 { 00:07:59.438 "subsystem": "bdev", 00:07:59.438 "config": [ 00:07:59.438 { 00:07:59.438 "params": { 00:07:59.438 "block_size": 512, 00:07:59.438 "num_blocks": 1048576, 00:07:59.438 "name": "malloc0" 00:07:59.438 }, 00:07:59.438 "method": "bdev_malloc_create" 00:07:59.438 }, 00:07:59.438 { 00:07:59.438 "params": { 00:07:59.438 "filename": "/dev/zram1", 00:07:59.438 "name": "uring0" 00:07:59.438 }, 00:07:59.438 "method": "bdev_uring_create" 00:07:59.438 }, 00:07:59.438 { 00:07:59.438 "method": "bdev_wait_for_examine" 00:07:59.438 } 00:07:59.438 ] 00:07:59.438 } 00:07:59.438 ] 00:07:59.438 } 00:07:59.438 [2024-07-15 12:32:32.016276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.697 [2024-07-15 12:32:32.135854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.697 [2024-07-15 12:32:32.192551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.511  Copying: 169/512 [MB] (169 MBps) Copying: 326/512 [MB] (157 MBps) Copying: 483/512 [MB] (156 MBps) Copying: 512/512 [MB] (average 161 MBps) 00:08:03.511 00:08:03.511 12:32:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:03.511 12:32:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 3g4c8o2unqwsnex0hlse58qqtew600qi2748vgydbl7unfqp3hwni59ubvukfu4kyj4bvwg7l99vvu2mceeyox8x8pkn7jpn7rh654ezi1niri42v5ofyyqj35qtzyannxjrj6t9aodb84xykvm2j4ppljse5hsz73kdkz0ohfnf9kf5rr04fbliamljzk0uv2x1fbek88pcwxnm8f3octlnd229okxtrc5jjqkicqi4j0m2hdsf2p3web74he158t7rrvhdlfpr6w6ydk7y1uy465nrsdjeqh2m4yvs03du8mw4qqpg9w7mtmkkthfc12m9wqv7gz2mhj7ualhq3k9h9tp9cg236iamx2k8991n59ufwvfag7nb8fju6m5pjaoje2rnxyrhaevhusawm0wtmz89pay9cg9bnwx9ens4fjsj7tzf0fhy2c451xt8wmoq635y75rhg67ydiaoky694617tesgmcppflu0q7ddeao8151axhhb23bgzxo09zvyic02q1b2r1om8pj4sumuvubugrtj93rp5ksq1kbyizazo6lk28uk5a4ny72eridb6del8pat0ur29q5km5ns232qv6pozx3sdel0jdtxo170mb4dwim0v3eopz02qhu5j9z7s279nltn90635kffvat9jn62jtsifd59qy8s9no9u3tecdzk2e0oflrq4cysmtgeguztaltbuqfiry9ey90xocipys2kmd3cft2pqs94hxpx5uf6pknh545u6x4r6q82m6786709qxk3kjln4zl0pwlxlugqhpjis019g5r3kv7est3zvubzbyt6b90cgfvkorwbnd5mnzk0kdaqllhvvg1xv4ikh0kfv4hamyfsxctfovr848jyclbir19ows3by44a7mrn0ynz8ju4u8x5wi58e9z7zx8bhxj2fsclkc9njfhcs6sq867uymgcaug6n17ns7sh3js1jvbs2bhxrc7u8dpm9p68bfvflhcq5hbevlmzs5l6mgzm == 
\3\g\4\c\8\o\2\u\n\q\w\s\n\e\x\0\h\l\s\e\5\8\q\q\t\e\w\6\0\0\q\i\2\7\4\8\v\g\y\d\b\l\7\u\n\f\q\p\3\h\w\n\i\5\9\u\b\v\u\k\f\u\4\k\y\j\4\b\v\w\g\7\l\9\9\v\v\u\2\m\c\e\e\y\o\x\8\x\8\p\k\n\7\j\p\n\7\r\h\6\5\4\e\z\i\1\n\i\r\i\4\2\v\5\o\f\y\y\q\j\3\5\q\t\z\y\a\n\n\x\j\r\j\6\t\9\a\o\d\b\8\4\x\y\k\v\m\2\j\4\p\p\l\j\s\e\5\h\s\z\7\3\k\d\k\z\0\o\h\f\n\f\9\k\f\5\r\r\0\4\f\b\l\i\a\m\l\j\z\k\0\u\v\2\x\1\f\b\e\k\8\8\p\c\w\x\n\m\8\f\3\o\c\t\l\n\d\2\2\9\o\k\x\t\r\c\5\j\j\q\k\i\c\q\i\4\j\0\m\2\h\d\s\f\2\p\3\w\e\b\7\4\h\e\1\5\8\t\7\r\r\v\h\d\l\f\p\r\6\w\6\y\d\k\7\y\1\u\y\4\6\5\n\r\s\d\j\e\q\h\2\m\4\y\v\s\0\3\d\u\8\m\w\4\q\q\p\g\9\w\7\m\t\m\k\k\t\h\f\c\1\2\m\9\w\q\v\7\g\z\2\m\h\j\7\u\a\l\h\q\3\k\9\h\9\t\p\9\c\g\2\3\6\i\a\m\x\2\k\8\9\9\1\n\5\9\u\f\w\v\f\a\g\7\n\b\8\f\j\u\6\m\5\p\j\a\o\j\e\2\r\n\x\y\r\h\a\e\v\h\u\s\a\w\m\0\w\t\m\z\8\9\p\a\y\9\c\g\9\b\n\w\x\9\e\n\s\4\f\j\s\j\7\t\z\f\0\f\h\y\2\c\4\5\1\x\t\8\w\m\o\q\6\3\5\y\7\5\r\h\g\6\7\y\d\i\a\o\k\y\6\9\4\6\1\7\t\e\s\g\m\c\p\p\f\l\u\0\q\7\d\d\e\a\o\8\1\5\1\a\x\h\h\b\2\3\b\g\z\x\o\0\9\z\v\y\i\c\0\2\q\1\b\2\r\1\o\m\8\p\j\4\s\u\m\u\v\u\b\u\g\r\t\j\9\3\r\p\5\k\s\q\1\k\b\y\i\z\a\z\o\6\l\k\2\8\u\k\5\a\4\n\y\7\2\e\r\i\d\b\6\d\e\l\8\p\a\t\0\u\r\2\9\q\5\k\m\5\n\s\2\3\2\q\v\6\p\o\z\x\3\s\d\e\l\0\j\d\t\x\o\1\7\0\m\b\4\d\w\i\m\0\v\3\e\o\p\z\0\2\q\h\u\5\j\9\z\7\s\2\7\9\n\l\t\n\9\0\6\3\5\k\f\f\v\a\t\9\j\n\6\2\j\t\s\i\f\d\5\9\q\y\8\s\9\n\o\9\u\3\t\e\c\d\z\k\2\e\0\o\f\l\r\q\4\c\y\s\m\t\g\e\g\u\z\t\a\l\t\b\u\q\f\i\r\y\9\e\y\9\0\x\o\c\i\p\y\s\2\k\m\d\3\c\f\t\2\p\q\s\9\4\h\x\p\x\5\u\f\6\p\k\n\h\5\4\5\u\6\x\4\r\6\q\8\2\m\6\7\8\6\7\0\9\q\x\k\3\k\j\l\n\4\z\l\0\p\w\l\x\l\u\g\q\h\p\j\i\s\0\1\9\g\5\r\3\k\v\7\e\s\t\3\z\v\u\b\z\b\y\t\6\b\9\0\c\g\f\v\k\o\r\w\b\n\d\5\m\n\z\k\0\k\d\a\q\l\l\h\v\v\g\1\x\v\4\i\k\h\0\k\f\v\4\h\a\m\y\f\s\x\c\t\f\o\v\r\8\4\8\j\y\c\l\b\i\r\1\9\o\w\s\3\b\y\4\4\a\7\m\r\n\0\y\n\z\8\j\u\4\u\8\x\5\w\i\5\8\e\9\z\7\z\x\8\b\h\x\j\2\f\s\c\l\k\c\9\n\j\f\h\c\s\6\s\q\8\6\7\u\y\m\g\c\a\u\g\6\n\1\7\n\s\7\s\h\3\j\s\1\j\v\b\s\2\b\h\x\r\c\7\u\8\d\p\m\9\p\6\8\b\f\v\f\l\h\c\q\5\h\b\e\v\l\m\z\s\5\l\6\m\g\z\m ]] 00:08:03.511 12:32:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:03.511 12:32:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 3g4c8o2unqwsnex0hlse58qqtew600qi2748vgydbl7unfqp3hwni59ubvukfu4kyj4bvwg7l99vvu2mceeyox8x8pkn7jpn7rh654ezi1niri42v5ofyyqj35qtzyannxjrj6t9aodb84xykvm2j4ppljse5hsz73kdkz0ohfnf9kf5rr04fbliamljzk0uv2x1fbek88pcwxnm8f3octlnd229okxtrc5jjqkicqi4j0m2hdsf2p3web74he158t7rrvhdlfpr6w6ydk7y1uy465nrsdjeqh2m4yvs03du8mw4qqpg9w7mtmkkthfc12m9wqv7gz2mhj7ualhq3k9h9tp9cg236iamx2k8991n59ufwvfag7nb8fju6m5pjaoje2rnxyrhaevhusawm0wtmz89pay9cg9bnwx9ens4fjsj7tzf0fhy2c451xt8wmoq635y75rhg67ydiaoky694617tesgmcppflu0q7ddeao8151axhhb23bgzxo09zvyic02q1b2r1om8pj4sumuvubugrtj93rp5ksq1kbyizazo6lk28uk5a4ny72eridb6del8pat0ur29q5km5ns232qv6pozx3sdel0jdtxo170mb4dwim0v3eopz02qhu5j9z7s279nltn90635kffvat9jn62jtsifd59qy8s9no9u3tecdzk2e0oflrq4cysmtgeguztaltbuqfiry9ey90xocipys2kmd3cft2pqs94hxpx5uf6pknh545u6x4r6q82m6786709qxk3kjln4zl0pwlxlugqhpjis019g5r3kv7est3zvubzbyt6b90cgfvkorwbnd5mnzk0kdaqllhvvg1xv4ikh0kfv4hamyfsxctfovr848jyclbir19ows3by44a7mrn0ynz8ju4u8x5wi58e9z7zx8bhxj2fsclkc9njfhcs6sq867uymgcaug6n17ns7sh3js1jvbs2bhxrc7u8dpm9p68bfvflhcq5hbevlmzs5l6mgzm == 
\3\g\4\c\8\o\2\u\n\q\w\s\n\e\x\0\h\l\s\e\5\8\q\q\t\e\w\6\0\0\q\i\2\7\4\8\v\g\y\d\b\l\7\u\n\f\q\p\3\h\w\n\i\5\9\u\b\v\u\k\f\u\4\k\y\j\4\b\v\w\g\7\l\9\9\v\v\u\2\m\c\e\e\y\o\x\8\x\8\p\k\n\7\j\p\n\7\r\h\6\5\4\e\z\i\1\n\i\r\i\4\2\v\5\o\f\y\y\q\j\3\5\q\t\z\y\a\n\n\x\j\r\j\6\t\9\a\o\d\b\8\4\x\y\k\v\m\2\j\4\p\p\l\j\s\e\5\h\s\z\7\3\k\d\k\z\0\o\h\f\n\f\9\k\f\5\r\r\0\4\f\b\l\i\a\m\l\j\z\k\0\u\v\2\x\1\f\b\e\k\8\8\p\c\w\x\n\m\8\f\3\o\c\t\l\n\d\2\2\9\o\k\x\t\r\c\5\j\j\q\k\i\c\q\i\4\j\0\m\2\h\d\s\f\2\p\3\w\e\b\7\4\h\e\1\5\8\t\7\r\r\v\h\d\l\f\p\r\6\w\6\y\d\k\7\y\1\u\y\4\6\5\n\r\s\d\j\e\q\h\2\m\4\y\v\s\0\3\d\u\8\m\w\4\q\q\p\g\9\w\7\m\t\m\k\k\t\h\f\c\1\2\m\9\w\q\v\7\g\z\2\m\h\j\7\u\a\l\h\q\3\k\9\h\9\t\p\9\c\g\2\3\6\i\a\m\x\2\k\8\9\9\1\n\5\9\u\f\w\v\f\a\g\7\n\b\8\f\j\u\6\m\5\p\j\a\o\j\e\2\r\n\x\y\r\h\a\e\v\h\u\s\a\w\m\0\w\t\m\z\8\9\p\a\y\9\c\g\9\b\n\w\x\9\e\n\s\4\f\j\s\j\7\t\z\f\0\f\h\y\2\c\4\5\1\x\t\8\w\m\o\q\6\3\5\y\7\5\r\h\g\6\7\y\d\i\a\o\k\y\6\9\4\6\1\7\t\e\s\g\m\c\p\p\f\l\u\0\q\7\d\d\e\a\o\8\1\5\1\a\x\h\h\b\2\3\b\g\z\x\o\0\9\z\v\y\i\c\0\2\q\1\b\2\r\1\o\m\8\p\j\4\s\u\m\u\v\u\b\u\g\r\t\j\9\3\r\p\5\k\s\q\1\k\b\y\i\z\a\z\o\6\l\k\2\8\u\k\5\a\4\n\y\7\2\e\r\i\d\b\6\d\e\l\8\p\a\t\0\u\r\2\9\q\5\k\m\5\n\s\2\3\2\q\v\6\p\o\z\x\3\s\d\e\l\0\j\d\t\x\o\1\7\0\m\b\4\d\w\i\m\0\v\3\e\o\p\z\0\2\q\h\u\5\j\9\z\7\s\2\7\9\n\l\t\n\9\0\6\3\5\k\f\f\v\a\t\9\j\n\6\2\j\t\s\i\f\d\5\9\q\y\8\s\9\n\o\9\u\3\t\e\c\d\z\k\2\e\0\o\f\l\r\q\4\c\y\s\m\t\g\e\g\u\z\t\a\l\t\b\u\q\f\i\r\y\9\e\y\9\0\x\o\c\i\p\y\s\2\k\m\d\3\c\f\t\2\p\q\s\9\4\h\x\p\x\5\u\f\6\p\k\n\h\5\4\5\u\6\x\4\r\6\q\8\2\m\6\7\8\6\7\0\9\q\x\k\3\k\j\l\n\4\z\l\0\p\w\l\x\l\u\g\q\h\p\j\i\s\0\1\9\g\5\r\3\k\v\7\e\s\t\3\z\v\u\b\z\b\y\t\6\b\9\0\c\g\f\v\k\o\r\w\b\n\d\5\m\n\z\k\0\k\d\a\q\l\l\h\v\v\g\1\x\v\4\i\k\h\0\k\f\v\4\h\a\m\y\f\s\x\c\t\f\o\v\r\8\4\8\j\y\c\l\b\i\r\1\9\o\w\s\3\b\y\4\4\a\7\m\r\n\0\y\n\z\8\j\u\4\u\8\x\5\w\i\5\8\e\9\z\7\z\x\8\b\h\x\j\2\f\s\c\l\k\c\9\n\j\f\h\c\s\6\s\q\8\6\7\u\y\m\g\c\a\u\g\6\n\1\7\n\s\7\s\h\3\j\s\1\j\v\b\s\2\b\h\x\r\c\7\u\8\d\p\m\9\p\6\8\b\f\v\f\l\h\c\q\5\h\b\e\v\l\m\z\s\5\l\6\m\g\z\m ]] 00:08:03.511 12:32:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:03.773 12:32:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:03.773 12:32:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:03.773 12:32:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:03.773 12:32:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.773 [2024-07-15 12:32:36.423452] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:03.773 [2024-07-15 12:32:36.423540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64218 ] 00:08:03.773 { 00:08:03.773 "subsystems": [ 00:08:03.773 { 00:08:03.773 "subsystem": "bdev", 00:08:03.773 "config": [ 00:08:03.773 { 00:08:03.773 "params": { 00:08:03.773 "block_size": 512, 00:08:03.773 "num_blocks": 1048576, 00:08:03.773 "name": "malloc0" 00:08:03.773 }, 00:08:03.773 "method": "bdev_malloc_create" 00:08:03.773 }, 00:08:03.773 { 00:08:03.773 "params": { 00:08:03.773 "filename": "/dev/zram1", 00:08:03.773 "name": "uring0" 00:08:03.773 }, 00:08:03.773 "method": "bdev_uring_create" 00:08:03.773 }, 00:08:03.773 { 00:08:03.773 "method": "bdev_wait_for_examine" 00:08:03.773 } 00:08:03.773 ] 00:08:03.773 } 00:08:03.773 ] 00:08:03.773 } 00:08:04.035 [2024-07-15 12:32:36.559913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.035 [2024-07-15 12:32:36.696465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.293 [2024-07-15 12:32:36.759170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.363  Copying: 147/512 [MB] (147 MBps) Copying: 294/512 [MB] (147 MBps) Copying: 442/512 [MB] (147 MBps) Copying: 512/512 [MB] (average 147 MBps) 00:08:08.363 00:08:08.363 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:08.363 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:08.363 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:08.363 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:08.364 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:08.364 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:08.364 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:08.364 12:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:08.364 [2024-07-15 12:32:40.900859] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:08.364 [2024-07-15 12:32:40.901686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64285 ] 00:08:08.364 { 00:08:08.364 "subsystems": [ 00:08:08.364 { 00:08:08.364 "subsystem": "bdev", 00:08:08.364 "config": [ 00:08:08.364 { 00:08:08.364 "params": { 00:08:08.364 "block_size": 512, 00:08:08.364 "num_blocks": 1048576, 00:08:08.364 "name": "malloc0" 00:08:08.364 }, 00:08:08.364 "method": "bdev_malloc_create" 00:08:08.364 }, 00:08:08.364 { 00:08:08.364 "params": { 00:08:08.364 "filename": "/dev/zram1", 00:08:08.364 "name": "uring0" 00:08:08.364 }, 00:08:08.364 "method": "bdev_uring_create" 00:08:08.364 }, 00:08:08.364 { 00:08:08.364 "params": { 00:08:08.364 "name": "uring0" 00:08:08.364 }, 00:08:08.364 "method": "bdev_uring_delete" 00:08:08.364 }, 00:08:08.364 { 00:08:08.364 "method": "bdev_wait_for_examine" 00:08:08.364 } 00:08:08.364 ] 00:08:08.364 } 00:08:08.364 ] 00:08:08.364 } 00:08:08.364 [2024-07-15 12:32:41.041347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.621 [2024-07-15 12:32:41.159559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.621 [2024-07-15 12:32:41.215343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.445  Copying: 0/0 [B] (average 0 Bps) 00:08:09.445 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:09.445 12:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:09.445 { 00:08:09.445 "subsystems": [ 00:08:09.445 { 00:08:09.445 "subsystem": "bdev", 00:08:09.445 "config": [ 00:08:09.445 { 00:08:09.445 "params": { 00:08:09.445 "block_size": 512, 00:08:09.445 "num_blocks": 1048576, 00:08:09.445 "name": "malloc0" 00:08:09.445 }, 00:08:09.445 "method": "bdev_malloc_create" 00:08:09.445 }, 00:08:09.445 { 00:08:09.445 "params": { 00:08:09.445 "filename": "/dev/zram1", 00:08:09.445 "name": "uring0" 00:08:09.445 }, 00:08:09.445 "method": "bdev_uring_create" 00:08:09.445 }, 00:08:09.445 { 00:08:09.445 "params": { 00:08:09.445 "name": "uring0" 00:08:09.445 }, 00:08:09.445 "method": "bdev_uring_delete" 00:08:09.445 }, 00:08:09.445 { 00:08:09.445 "method": "bdev_wait_for_examine" 00:08:09.445 } 00:08:09.445 ] 00:08:09.445 } 00:08:09.445 ] 00:08:09.445 } 00:08:09.445 [2024-07-15 12:32:41.913194] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:09.445 [2024-07-15 12:32:41.913326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64314 ] 00:08:09.445 [2024-07-15 12:32:42.055079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.703 [2024-07-15 12:32:42.172905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.703 [2024-07-15 12:32:42.227897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.962 [2024-07-15 12:32:42.430859] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:09.963 [2024-07-15 12:32:42.430916] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:09.963 [2024-07-15 12:32:42.430930] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:09.963 [2024-07-15 12:32:42.430942] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.221 [2024-07-15 12:32:42.742385] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:08:10.221 12:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:10.480 ************************************ 00:08:10.480 END TEST dd_uring_copy 00:08:10.480 ************************************ 00:08:10.480 00:08:10.480 real 0m16.167s 00:08:10.480 user 0m10.896s 00:08:10.480 sys 0m13.305s 00:08:10.480 12:32:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.480 12:32:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.480 12:32:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:08:10.480 ************************************ 00:08:10.480 END TEST spdk_dd_uring 00:08:10.480 ************************************ 00:08:10.480 00:08:10.480 real 0m16.306s 00:08:10.480 user 0m10.946s 00:08:10.480 sys 0m13.396s 00:08:10.480 12:32:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.480 12:32:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:10.739 12:32:43 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:10.739 12:32:43 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:10.739 12:32:43 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.739 12:32:43 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.739 12:32:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:10.739 ************************************ 00:08:10.739 START TEST spdk_dd_sparse 00:08:10.739 ************************************ 00:08:10.739 12:32:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:10.739 * Looking for test storage... 00:08:10.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:10.739 12:32:43 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.739 12:32:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.739 12:32:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.739 12:32:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.739 12:32:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.739 12:32:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:10.740 1+0 records in 00:08:10.740 1+0 records out 00:08:10.740 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00730142 s, 574 MB/s 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:10.740 1+0 records in 00:08:10.740 1+0 records out 00:08:10.740 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00639275 s, 656 MB/s 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:10.740 1+0 records in 00:08:10.740 1+0 records out 00:08:10.740 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00429459 s, 977 MB/s 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:10.740 ************************************ 00:08:10.740 START TEST dd_sparse_file_to_file 00:08:10.740 ************************************ 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:10.740 12:32:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:10.740 [2024-07-15 12:32:43.373286] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:10.740 [2024-07-15 12:32:43.373375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64400 ] 00:08:10.740 { 00:08:10.740 "subsystems": [ 00:08:10.740 { 00:08:10.740 "subsystem": "bdev", 00:08:10.740 "config": [ 00:08:10.740 { 00:08:10.740 "params": { 00:08:10.740 "block_size": 4096, 00:08:10.740 "filename": "dd_sparse_aio_disk", 00:08:10.740 "name": "dd_aio" 00:08:10.740 }, 00:08:10.740 "method": "bdev_aio_create" 00:08:10.740 }, 00:08:10.740 { 00:08:10.740 "params": { 00:08:10.740 "lvs_name": "dd_lvstore", 00:08:10.740 "bdev_name": "dd_aio" 00:08:10.740 }, 00:08:10.740 "method": "bdev_lvol_create_lvstore" 00:08:10.740 }, 00:08:10.740 { 00:08:10.740 "method": "bdev_wait_for_examine" 00:08:10.740 } 00:08:10.740 ] 00:08:10.740 } 00:08:10.740 ] 00:08:10.740 } 00:08:10.998 [2024-07-15 12:32:43.506968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.999 [2024-07-15 12:32:43.626797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.338 [2024-07-15 12:32:43.683408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.621  Copying: 12/36 [MB] (average 600 MBps) 00:08:11.621 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:11.621 12:32:44 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:11.621 00:08:11.621 real 0m0.729s 00:08:11.621 user 0m0.471s 00:08:11.621 sys 0m0.365s 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 ************************************ 00:08:11.621 END TEST dd_sparse_file_to_file 00:08:11.621 ************************************ 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 ************************************ 00:08:11.621 START TEST dd_sparse_file_to_bdev 00:08:11.621 ************************************ 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:11.621 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 [2024-07-15 12:32:44.154656] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
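Before the file_to_bdev run continues below, note the pass criterion dd_sparse_file_to_file applied above (dd_sparse_bdev_to_file uses the same one later): apparent size and allocated blocks must match between source and copy. A standalone bash sketch of that check follows; it reuses the test's file names and stat format strings for illustration and is not the actual sparse.sh code.

    # Illustrative sparseness check: the copy must keep the 36 MiB apparent size
    # (37748736 bytes) while allocating only the ~12 MiB of real data that the
    # three 4 MiB writes produced (24576 blocks of, typically, 512 bytes).
    src=file_zero1
    dst=file_zero2

    src_s=$(stat --printf=%s "$src"); dst_s=$(stat --printf=%s "$dst")
    src_b=$(stat --printf=%b "$src"); dst_b=$(stat --printf=%b "$dst")

    [[ "$src_s" == "$dst_s" ]] || { echo "apparent size differs: $src_s vs $dst_s" >&2; exit 1; }
    [[ "$src_b" == "$dst_b" ]] || { echo "allocated blocks differ: $src_b vs $dst_b (holes lost)" >&2; exit 1; }
    echo "sparse copy preserved holes: $dst_s bytes apparent, $dst_b blocks allocated"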
00:08:11.621 [2024-07-15 12:32:44.154772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64448 ] 00:08:11.621 { 00:08:11.621 "subsystems": [ 00:08:11.621 { 00:08:11.621 "subsystem": "bdev", 00:08:11.621 "config": [ 00:08:11.621 { 00:08:11.621 "params": { 00:08:11.621 "block_size": 4096, 00:08:11.621 "filename": "dd_sparse_aio_disk", 00:08:11.621 "name": "dd_aio" 00:08:11.621 }, 00:08:11.621 "method": "bdev_aio_create" 00:08:11.621 }, 00:08:11.621 { 00:08:11.621 "params": { 00:08:11.621 "lvs_name": "dd_lvstore", 00:08:11.621 "lvol_name": "dd_lvol", 00:08:11.621 "size_in_mib": 36, 00:08:11.621 "thin_provision": true 00:08:11.621 }, 00:08:11.621 "method": "bdev_lvol_create" 00:08:11.621 }, 00:08:11.621 { 00:08:11.621 "method": "bdev_wait_for_examine" 00:08:11.621 } 00:08:11.621 ] 00:08:11.621 } 00:08:11.621 ] 00:08:11.621 } 00:08:11.621 [2024-07-15 12:32:44.293618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.880 [2024-07-15 12:32:44.413828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.880 [2024-07-15 12:32:44.470776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.139  Copying: 12/36 [MB] (average 521 MBps) 00:08:12.139 00:08:12.139 00:08:12.139 real 0m0.713s 00:08:12.139 user 0m0.468s 00:08:12.139 sys 0m0.348s 00:08:12.139 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.139 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.139 ************************************ 00:08:12.139 END TEST dd_sparse_file_to_bdev 00:08:12.139 ************************************ 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:12.398 ************************************ 00:08:12.398 START TEST dd_sparse_bdev_to_file 00:08:12.398 ************************************ 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
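The /dev/fd/62 argument seen in these spdk_dd invocations appears to be bash process substitution: gen_conf writes the bdev JSON to an anonymous pipe and spdk_dd reads it through --json. A hand-rolled equivalent of the bdev-to-file copy being traced here is sketched below; it is illustrative only (the real test assembles the JSON from the method_bdev_* arrays via gen_conf), but the config mirrors the dump that follows: an AIO bdev over dd_sparse_aio_disk plus bdev_wait_for_examine, after which the existing dd_lvstore/dd_lvol lvol is discovered by examine.

    # Illustrative stand-in for gen_conf + the traced spdk_dd call (paths and
    # bdev names taken from the log above).
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_aio_create",
              "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'

    # Copy the thin-provisioned lvol back into a plain file, skipping holes;
    # the <(...) process substitution is what shows up as /dev/fd/62 in the trace.
    "$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse \
        --json <(printf '%s\n' "$conf")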
00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:12.398 12:32:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:12.398 [2024-07-15 12:32:44.918274] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:12.398 [2024-07-15 12:32:44.918378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64486 ] 00:08:12.398 { 00:08:12.398 "subsystems": [ 00:08:12.398 { 00:08:12.398 "subsystem": "bdev", 00:08:12.398 "config": [ 00:08:12.398 { 00:08:12.398 "params": { 00:08:12.398 "block_size": 4096, 00:08:12.398 "filename": "dd_sparse_aio_disk", 00:08:12.398 "name": "dd_aio" 00:08:12.398 }, 00:08:12.398 "method": "bdev_aio_create" 00:08:12.398 }, 00:08:12.398 { 00:08:12.398 "method": "bdev_wait_for_examine" 00:08:12.398 } 00:08:12.398 ] 00:08:12.398 } 00:08:12.398 ] 00:08:12.398 } 00:08:12.398 [2024-07-15 12:32:45.057964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.657 [2024-07-15 12:32:45.183703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.657 [2024-07-15 12:32:45.239567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.916  Copying: 12/36 [MB] (average 923 MBps) 00:08:12.916 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:12.916 00:08:12.916 real 0m0.719s 00:08:12.916 user 0m0.459s 00:08:12.916 sys 0m0.360s 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.916 ************************************ 00:08:12.916 END TEST dd_sparse_bdev_to_file 00:08:12.916 ************************************ 00:08:12.916 12:32:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:13.176 00:08:13.176 real 0m2.448s 00:08:13.176 user 0m1.496s 00:08:13.176 sys 0m1.254s 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.176 12:32:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:13.176 ************************************ 00:08:13.176 END TEST spdk_dd_sparse 00:08:13.176 ************************************ 00:08:13.176 12:32:45 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:13.176 12:32:45 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:13.176 12:32:45 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.176 12:32:45 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.176 12:32:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:13.176 ************************************ 00:08:13.176 START TEST spdk_dd_negative 00:08:13.176 ************************************ 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:13.176 * Looking for test storage... 00:08:13.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:13.176 ************************************ 00:08:13.176 START TEST dd_invalid_arguments 00:08:13.176 ************************************ 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:13.176 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.177 12:32:45 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.177 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:13.177 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:13.177 00:08:13.177 CPU options: 00:08:13.177 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:13.177 (like [0,1,10]) 00:08:13.177 --lcores lcore to CPU mapping list. The list is in the format: 00:08:13.177 [<,lcores[@CPUs]>...] 00:08:13.177 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:13.177 Within the group, '-' is used for range separator, 00:08:13.177 ',' is used for single number separator. 00:08:13.177 '( )' can be omitted for single element group, 00:08:13.177 '@' can be omitted if cpus and lcores have the same value 00:08:13.177 --disable-cpumask-locks Disable CPU core lock files. 00:08:13.177 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:13.177 pollers in the app support interrupt mode) 00:08:13.177 -p, --main-core main (primary) core for DPDK 00:08:13.177 00:08:13.177 Configuration options: 00:08:13.177 -c, --config, --json JSON config file 00:08:13.177 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:13.177 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:13.177 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:13.177 --rpcs-allowed comma-separated list of permitted RPCS 00:08:13.177 --json-ignore-init-errors don't exit on invalid config entry 00:08:13.177 00:08:13.177 Memory options: 00:08:13.177 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:13.177 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:13.177 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:13.177 -R, --huge-unlink unlink huge files after initialization 00:08:13.177 -n, --mem-channels number of memory channels used for DPDK 00:08:13.177 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:13.177 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:13.177 --no-huge run without using hugepages 00:08:13.177 -i, --shm-id shared memory ID (optional) 00:08:13.177 -g, --single-file-segments force creating just one hugetlbfs file 00:08:13.177 00:08:13.177 PCI options: 00:08:13.177 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:13.177 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:13.177 -u, --no-pci disable PCI access 00:08:13.177 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:13.177 00:08:13.177 Log options: 00:08:13.177 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:13.177 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:13.177 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:13.177 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:13.177 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:13.177 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:13.177 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:13.177 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:13.177 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:13.177 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:13.177 virtio_vfio_user, vmd) 00:08:13.177 --silence-noticelog disable notice level logging to stderr 00:08:13.177 00:08:13.177 Trace options: 00:08:13.177 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:13.177 setting 0 to disable trace (default 32768) 00:08:13.177 Tracepoints vary in size and can use more than one trace entry. 00:08:13.177 -e, --tpoint-group [:] 00:08:13.177 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:13.177 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:13.177 [2024-07-15 12:32:45.847462] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:13.437 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:13.437 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:13.437 a tracepoint group. First tpoint inside a group can be enabled by 00:08:13.437 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:13.437 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:13.437 in /include/spdk_internal/trace_defs.h 00:08:13.437 00:08:13.437 Other options: 00:08:13.437 -h, --help show this usage 00:08:13.437 -v, --version print SPDK version 00:08:13.437 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:13.437 --env-context Opaque context for use of the env implementation 00:08:13.437 00:08:13.437 Application specific: 00:08:13.437 [--------- DD Options ---------] 00:08:13.437 --if Input file. Must specify either --if or --ib. 00:08:13.437 --ib Input bdev. Must specifier either --if or --ib 00:08:13.437 --of Output file. Must specify either --of or --ob. 00:08:13.437 --ob Output bdev. Must specify either --of or --ob. 00:08:13.437 --iflag Input file flags. 00:08:13.437 --oflag Output file flags. 00:08:13.437 --bs I/O unit size (default: 4096) 00:08:13.437 --qd Queue depth (default: 2) 00:08:13.437 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:13.437 --skip Skip this many I/O units at start of input. (default: 0) 00:08:13.437 --seek Skip this many I/O units at start of output. (default: 0) 00:08:13.437 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:13.437 --sparse Enable hole skipping in input target 00:08:13.437 Available iflag and oflag values: 00:08:13.437 append - append mode 00:08:13.437 direct - use direct I/O for data 00:08:13.437 directory - fail unless a directory 00:08:13.437 dsync - use synchronized I/O for data 00:08:13.437 noatime - do not update access time 00:08:13.437 noctty - do not assign controlling terminal from file 00:08:13.437 nofollow - do not follow symlinks 00:08:13.437 nonblock - use non-blocking I/O 00:08:13.437 sync - use synchronized I/O for data and metadata 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.437 00:08:13.437 real 0m0.065s 00:08:13.437 user 0m0.037s 00:08:13.437 sys 0m0.026s 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:13.437 ************************************ 00:08:13.437 END TEST dd_invalid_arguments 00:08:13.437 ************************************ 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:13.437 ************************************ 00:08:13.437 START TEST dd_double_input 00:08:13.437 ************************************ 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.437 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:13.437 [2024-07-15 12:32:45.959187] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
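The expected failure above can be exercised without the NOT/valid_exec_arg plumbing; the minimal bash sketch below captures the same dd_double_input check. It is illustrative only and not the actual autotest_common.sh helpers.

    # spdk_dd must refuse --if together with --ib and exit non-zero
    # (es=22 in the trace above), printing the error from spdk_dd.c:1487.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

    if "$SPDK_DD" --if="$DUMP0" --ib= --ob= >dd_out.log 2>&1; then
        echo "FAIL: spdk_dd accepted --if together with --ib" >&2
        exit 1
    fi
    # Both streams were captured into dd_out.log; confirm the diagnostic text.
    grep -q 'You may specify either --if or --ib, but not both' dd_out.log &&
        echo "PASS: double input rejected as expected"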
00:08:13.438 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:13.438 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.438 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.438 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.438 00:08:13.438 real 0m0.064s 00:08:13.438 user 0m0.035s 00:08:13.438 sys 0m0.028s 00:08:13.438 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.438 12:32:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:13.438 ************************************ 00:08:13.438 END TEST dd_double_input 00:08:13.438 ************************************ 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:13.438 ************************************ 00:08:13.438 START TEST dd_double_output 00:08:13.438 ************************************ 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:13.438 [2024-07-15 12:32:46.087808] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.438 00:08:13.438 real 0m0.086s 00:08:13.438 user 0m0.055s 00:08:13.438 sys 0m0.029s 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.438 12:32:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:13.438 ************************************ 00:08:13.438 END TEST dd_double_output 00:08:13.438 ************************************ 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:13.697 ************************************ 00:08:13.697 START TEST dd_no_input 00:08:13.697 ************************************ 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.697 12:32:46 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:13.697 [2024-07-15 12:32:46.213642] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.697 00:08:13.697 real 0m0.073s 00:08:13.697 user 0m0.047s 00:08:13.697 sys 0m0.025s 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.697 ************************************ 00:08:13.697 END TEST dd_no_input 00:08:13.697 ************************************ 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:13.697 ************************************ 00:08:13.697 START TEST dd_no_output 00:08:13.697 ************************************ 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.697 12:32:46 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.697 [2024-07-15 12:32:46.330360] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.697 00:08:13.697 real 0m0.065s 00:08:13.697 user 0m0.039s 00:08:13.697 sys 0m0.026s 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.697 ************************************ 00:08:13.697 END TEST dd_no_output 00:08:13.697 12:32:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:13.697 ************************************ 00:08:13.956 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:13.956 12:32:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:13.956 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.956 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.956 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:13.956 ************************************ 00:08:13.956 START TEST dd_wrong_blocksize 00:08:13.956 ************************************ 00:08:13.956 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:13.957 [2024-07-15 12:32:46.443195] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.957 00:08:13.957 real 0m0.065s 00:08:13.957 user 0m0.044s 00:08:13.957 sys 0m0.021s 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.957 ************************************ 00:08:13.957 END TEST dd_wrong_blocksize 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:13.957 ************************************ 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:13.957 ************************************ 00:08:13.957 START TEST dd_smaller_blocksize 00:08:13.957 ************************************ 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.957 12:32:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:13.957 [2024-07-15 12:32:46.567768] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:13.957 [2024-07-15 12:32:46.567876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64699 ] 00:08:14.215 [2024-07-15 12:32:46.702334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.215 [2024-07-15 12:32:46.843191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.215 [2024-07-15 12:32:46.897589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.783 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:14.783 [2024-07-15 12:32:47.209403] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:14.783 [2024-07-15 12:32:47.209466] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.783 [2024-07-15 12:32:47.326446] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.783 00:08:14.783 real 0m0.920s 00:08:14.783 user 0m0.443s 00:08:14.783 sys 0m0.369s 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.783 12:32:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:14.783 ************************************ 00:08:14.783 END TEST dd_smaller_blocksize 00:08:14.783 ************************************ 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.042 ************************************ 00:08:15.042 START TEST dd_invalid_count 00:08:15.042 ************************************ 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:15.042 [2024-07-15 12:32:47.534973] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.042 00:08:15.042 real 0m0.074s 00:08:15.042 user 0m0.042s 00:08:15.042 sys 0m0.031s 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:15.042 ************************************ 00:08:15.042 END TEST dd_invalid_count 
00:08:15.042 ************************************ 00:08:15.042 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.043 ************************************ 00:08:15.043 START TEST dd_invalid_oflag 00:08:15.043 ************************************ 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:15.043 [2024-07-15 12:32:47.656642] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.043 00:08:15.043 real 0m0.075s 00:08:15.043 user 0m0.047s 00:08:15.043 sys 0m0.027s 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:15.043 
************************************ 00:08:15.043 END TEST dd_invalid_oflag 00:08:15.043 ************************************ 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.043 ************************************ 00:08:15.043 START TEST dd_invalid_iflag 00:08:15.043 ************************************ 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.043 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:15.302 [2024-07-15 12:32:47.788378] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.302 00:08:15.302 real 0m0.098s 00:08:15.302 user 0m0.055s 00:08:15.302 sys 0m0.039s 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.302 ************************************ 00:08:15.302 END TEST dd_invalid_iflag 00:08:15.302 ************************************ 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.302 ************************************ 00:08:15.302 START TEST dd_unknown_flag 00:08:15.302 ************************************ 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.302 12:32:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:15.302 [2024-07-15 12:32:47.909484] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:15.302 [2024-07-15 12:32:47.909567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64796 ] 00:08:15.561 [2024-07-15 12:32:48.047407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.561 [2024-07-15 12:32:48.181171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.561 [2024-07-15 12:32:48.241743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.820 [2024-07-15 12:32:48.280181] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:15.820 [2024-07-15 12:32:48.280245] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.820 [2024-07-15 12:32:48.280312] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:15.820 [2024-07-15 12:32:48.280329] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.820 [2024-07-15 12:32:48.280639] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:15.820 [2024-07-15 12:32:48.280660] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.820 [2024-07-15 12:32:48.280747] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:15.820 [2024-07-15 12:32:48.280769] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:15.820 [2024-07-15 12:32:48.401584] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:16.079 00:08:16.079 real 0m0.645s 00:08:16.079 user 0m0.389s 00:08:16.079 sys 0m0.158s 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.079 ************************************ 00:08:16.079 END TEST dd_unknown_flag 00:08:16.079 ************************************ 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:16.079 ************************************ 00:08:16.079 START TEST dd_invalid_json 00:08:16.079 ************************************ 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:08:16.079 12:32:48 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.079 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:16.079 [2024-07-15 12:32:48.616376] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:16.079 [2024-07-15 12:32:48.616480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64825 ] 00:08:16.079 [2024-07-15 12:32:48.750623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.338 [2024-07-15 12:32:48.864076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.338 [2024-07-15 12:32:48.864174] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:16.338 [2024-07-15 12:32:48.864225] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:16.338 [2024-07-15 12:32:48.864235] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.338 [2024-07-15 12:32:48.864281] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:16.338 00:08:16.338 real 0m0.406s 00:08:16.338 user 0m0.219s 00:08:16.338 sys 0m0.085s 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.338 ************************************ 00:08:16.338 END TEST dd_invalid_json 00:08:16.338 ************************************ 00:08:16.338 12:32:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 12:32:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:16.338 00:08:16.338 real 0m3.313s 00:08:16.338 user 0m1.663s 00:08:16.338 sys 0m1.288s 00:08:16.338 12:32:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.338 12:32:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 ************************************ 00:08:16.338 END TEST spdk_dd_negative 00:08:16.338 ************************************ 00:08:16.599 12:32:49 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:16.599 00:08:16.599 real 1m20.366s 00:08:16.599 user 0m52.372s 00:08:16.599 sys 0m34.639s 00:08:16.599 12:32:49 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.599 12:32:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:16.599 ************************************ 00:08:16.599 END TEST spdk_dd 00:08:16.599 ************************************ 00:08:16.599 12:32:49 -- common/autotest_common.sh@1142 -- # return 0 00:08:16.599 12:32:49 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:16.599 12:32:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:16.599 12:32:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:16.599 12:32:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.599 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.599 12:32:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:08:16.599 12:32:49 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:16.599 12:32:49 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:16.599 12:32:49 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:16.599 12:32:49 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:16.599 12:32:49 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:16.599 12:32:49 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:16.599 12:32:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.599 12:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.599 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.599 ************************************ 00:08:16.599 START TEST nvmf_tcp 00:08:16.599 ************************************ 00:08:16.599 12:32:49 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:16.599 * Looking for test storage... 00:08:16.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.599 12:32:49 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.599 12:32:49 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.599 12:32:49 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.599 12:32:49 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.599 12:32:49 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.599 12:32:49 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.599 12:32:49 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:16.599 12:32:49 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:16.599 12:32:49 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.599 12:32:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:16.599 12:32:49 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:16.599 12:32:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.599 12:32:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.599 12:32:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.599 ************************************ 00:08:16.599 START TEST nvmf_host_management 00:08:16.599 ************************************ 00:08:16.599 
12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:16.859 * Looking for test storage... 00:08:16.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:16.859 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:16.860 Cannot find device "nvmf_init_br" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:16.860 Cannot find device "nvmf_tgt_br" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.860 Cannot find device "nvmf_tgt_br2" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:16.860 Cannot find device "nvmf_init_br" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:16.860 Cannot find device "nvmf_tgt_br" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:16.860 12:32:49 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:16.860 Cannot find device "nvmf_tgt_br2" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:16.860 Cannot find device "nvmf_br" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:16.860 Cannot find device "nvmf_init_if" 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:16.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:16.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:16.860 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
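For orientation, the veth/bridge layout assembled by these commands (and completed by the "ip link set ... master nvmf_br", iptables and ping steps immediately below) can be sketched as follows; interface names and addresses are exactly those shown in the trace:

# Topology sketched from the traced commands above/below:
#   nvmf_init_if (default netns, 10.0.0.1/24)  <--veth-->  nvmf_init_br  --+
#   nvmf_tgt_if  (in nvmf_tgt_ns_spdk, 10.0.0.2/24)  <--veth-->  nvmf_tgt_br   --+-- bridge nvmf_br
#   nvmf_tgt_if2 (in nvmf_tgt_ns_spdk, 10.0.0.3/24)  <--veth-->  nvmf_tgt_br2  --+
# i.e. the initiator side reaches 10.0.0.2 and 10.0.0.3 inside the namespace through nvmf_br;
# the iptables rules below open TCP/4420 on nvmf_init_if and allow forwarding across the bridge.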
00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:17.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:08:17.119 00:08:17.119 --- 10.0.0.2 ping statistics --- 00:08:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.119 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:17.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:08:17.119 00:08:17.119 --- 10.0.0.3 ping statistics --- 00:08:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.119 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:17.119 00:08:17.119 --- 10.0.0.1 ping statistics --- 00:08:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.119 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65085 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65085 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65085 ']' 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.119 12:32:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.377 [2024-07-15 12:32:49.818263] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:17.377 [2024-07-15 12:32:49.818355] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.377 [2024-07-15 12:32:49.963691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.633 [2024-07-15 12:32:50.103870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.633 [2024-07-15 12:32:50.104183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.633 [2024-07-15 12:32:50.104364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.633 [2024-07-15 12:32:50.104514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.633 [2024-07-15 12:32:50.104565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
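A minimal sketch of the target launch traced above, assuming the usual pattern of backgrounding the process behind nvmfpid/waitforlisten (binary path, flags and helper names are the ones shown in the trace; the real nvmfappstart also installs the shutdown trap seen a little further on):

# launch the NVMe-oF target inside the test namespace and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                  # 65085 in this run
waitforlisten "$nvmfpid"    # blocks until the target answers on /var/tmp/spdk.sock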
00:08:17.633 [2024-07-15 12:32:50.104915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.633 [2024-07-15 12:32:50.105063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.634 [2024-07-15 12:32:50.105319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:17.634 [2024-07-15 12:32:50.105332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.634 [2024-07-15 12:32:50.166572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.198 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.198 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:18.198 12:32:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.198 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.198 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.198 12:32:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.198 12:32:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.457 [2024-07-15 12:32:50.885298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.457 Malloc0 00:08:18.457 [2024-07-15 12:32:50.964379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.457 12:32:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65150 00:08:18.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65150 /var/tmp/bdevperf.sock 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65150 ']' 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:18.457 { 00:08:18.457 "params": { 00:08:18.457 "name": "Nvme$subsystem", 00:08:18.457 "trtype": "$TEST_TRANSPORT", 00:08:18.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.457 "adrfam": "ipv4", 00:08:18.457 "trsvcid": "$NVMF_PORT", 00:08:18.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.457 "hdgst": ${hdgst:-false}, 00:08:18.457 "ddgst": ${ddgst:-false} 00:08:18.457 }, 00:08:18.457 "method": "bdev_nvme_attach_controller" 00:08:18.457 } 00:08:18.457 EOF 00:08:18.457 )") 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:18.457 12:32:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:18.457 "params": { 00:08:18.457 "name": "Nvme0", 00:08:18.457 "trtype": "tcp", 00:08:18.457 "traddr": "10.0.0.2", 00:08:18.457 "adrfam": "ipv4", 00:08:18.457 "trsvcid": "4420", 00:08:18.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:18.457 "hdgst": false, 00:08:18.457 "ddgst": false 00:08:18.457 }, 00:08:18.457 "method": "bdev_nvme_attach_controller" 00:08:18.457 }' 00:08:18.457 [2024-07-15 12:32:51.061057] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
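For readability, the single-quoted payload printf'd by gen_nvmf_target_json above reassembles to the JSON below (whitespace added; this is just the controller entry visible in the trace, presumably embedded in the larger --json config handed to bdevperf on /dev/fd/63):

{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}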
00:08:18.457 [2024-07-15 12:32:51.061151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65150 ] 00:08:18.716 [2024-07-15 12:32:51.203129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.716 [2024-07-15 12:32:51.323462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.716 [2024-07-15 12:32:51.391279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.974 Running I/O for 10 seconds... 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.543 12:32:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:19.543 [2024-07-15 12:32:52.146031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.543 [2024-07-15 12:32:52.146455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.543 [2024-07-15 12:32:52.146465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.146981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.146993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.544 [2024-07-15 12:32:52.147404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.544 [2024-07-15 12:32:52.147416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.545 [2024-07-15 12:32:52.147426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.545 [2024-07-15 12:32:52.147447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.545 [2024-07-15 12:32:52.147468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.545 [2024-07-15 12:32:52.147489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.545 [2024-07-15 12:32:52.147510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1034ec0 is same with the state(5) to be set 00:08:19.545 [2024-07-15 12:32:52.147607] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1034ec0 was disconnected and freed. reset controller. 
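For context on the shell trace a few lines up: before toggling the host, the script waits until real traffic has flowed by polling bdevperf's RPC socket (the bdev_get_iostat call filtered through jq, which returned 835 completed reads against a threshold of 100). A standalone approximation of that waitforio loop, assuming the same socket and bdev name, would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Up to 10 attempts and a read threshold of 100, as in the traced loop;
# the 1-second pause between attempts is this sketch's own addition.
for _ in $(seq 1 10); do
  reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
    jq -r '.bdevs[0].num_read_ops')
  [[ $reads -ge 100 ]] && break
  sleep 1
done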
00:08:19.545 [2024-07-15 12:32:52.147701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:19.545 [2024-07-15 12:32:52.147718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:19.545 [2024-07-15 12:32:52.147755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:19.545 [2024-07-15 12:32:52.147775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:19.545 [2024-07-15 12:32:52.147803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.545 [2024-07-15 12:32:52.147812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102cd50 is same with the state(5) to be set 00:08:19.545 [2024-07-15 12:32:52.148892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:19.545 task offset: 122880 on job bdev=Nvme0n1 fails 00:08:19.545 00:08:19.545 Latency(us) 00:08:19.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.545 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:19.545 Job: Nvme0n1 ended in about 0.64 seconds with error 00:08:19.545 Verification LBA range: start 0x0 length 0x400 00:08:19.545 Nvme0n1 : 0.64 1501.84 93.86 100.12 0.00 38760.57 2204.39 42181.35 00:08:19.545 =================================================================================================================== 00:08:19.545 Total : 1501.84 93.86 100.12 0.00 38760.57 2204.39 42181.35 00:08:19.545 [2024-07-15 12:32:52.150815] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.545 [2024-07-15 12:32:52.150842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102cd50 (9): Bad file descriptor 00:08:19.545 [2024-07-15 12:32:52.156721] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
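The burst of ABORTED - SQ DELETION completions, the failed job, and the reset recorded above are the behaviour this test is after: host_management.sh revokes the initiator's host NQN while bdevperf still has 64 writes in flight, the queue pair is torn down, and because the NQN is immediately re-added the driver's controller reset succeeds. Replayed by hand against the same target, the toggle is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Same two RPCs as target/host_management.sh steps 84 and 85 in the trace
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # mirrors the sleep at step 87 of the script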
00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65150 00:08:20.481 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65150) - No such process 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:20.481 { 00:08:20.481 "params": { 00:08:20.481 "name": "Nvme$subsystem", 00:08:20.481 "trtype": "$TEST_TRANSPORT", 00:08:20.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.481 "adrfam": "ipv4", 00:08:20.481 "trsvcid": "$NVMF_PORT", 00:08:20.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.481 "hdgst": ${hdgst:-false}, 00:08:20.481 "ddgst": ${ddgst:-false} 00:08:20.481 }, 00:08:20.481 "method": "bdev_nvme_attach_controller" 00:08:20.481 } 00:08:20.481 EOF 00:08:20.481 )") 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:20.481 12:32:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:20.481 "params": { 00:08:20.481 "name": "Nvme0", 00:08:20.481 "trtype": "tcp", 00:08:20.481 "traddr": "10.0.0.2", 00:08:20.481 "adrfam": "ipv4", 00:08:20.481 "trsvcid": "4420", 00:08:20.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:20.481 "hdgst": false, 00:08:20.481 "ddgst": false 00:08:20.481 }, 00:08:20.481 "method": "bdev_nvme_attach_controller" 00:08:20.481 }' 00:08:20.746 [2024-07-15 12:32:53.203588] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:20.746 [2024-07-15 12:32:53.204365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65188 ] 00:08:20.746 [2024-07-15 12:32:53.346797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.004 [2024-07-15 12:32:53.461583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.004 [2024-07-15 12:32:53.526811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.004 Running I/O for 1 seconds... 
00:08:22.380 00:08:22.380 Latency(us) 00:08:22.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.380 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:22.380 Verification LBA range: start 0x0 length 0x400 00:08:22.380 Nvme0n1 : 1.04 1603.72 100.23 0.00 0.00 39135.39 4110.89 37176.79 00:08:22.380 =================================================================================================================== 00:08:22.380 Total : 1603.72 100.23 0.00 0.00 39135.39 4110.89 37176.79 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.380 12:32:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.380 rmmod nvme_tcp 00:08:22.380 rmmod nvme_fabrics 00:08:22.380 rmmod nvme_keyring 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65085 ']' 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65085 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65085 ']' 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65085 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65085 00:08:22.380 killing process with pid 65085 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65085' 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65085 00:08:22.380 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65085 00:08:22.638 [2024-07-15 12:32:55.275094] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.638 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.898 12:32:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:22.898 12:32:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:22.898 00:08:22.898 real 0m6.094s 00:08:22.898 user 0m23.510s 00:08:22.898 sys 0m1.564s 00:08:22.898 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.898 ************************************ 00:08:22.898 END TEST nvmf_host_management 00:08:22.898 ************************************ 00:08:22.898 12:32:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.898 12:32:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:22.898 12:32:55 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:22.898 12:32:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.898 12:32:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.898 12:32:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.898 ************************************ 00:08:22.898 START TEST nvmf_lvol 00:08:22.898 ************************************ 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:22.898 * Looking for test storage... 
00:08:22.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:22.898 12:32:55 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:22.898 Cannot find device "nvmf_tgt_br" 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:22.898 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.899 Cannot find device "nvmf_tgt_br2" 00:08:22.899 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:22.899 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:22.899 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:22.899 Cannot find device "nvmf_tgt_br" 00:08:22.899 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:22.899 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:22.899 Cannot find device "nvmf_tgt_br2" 00:08:22.899 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:22.899 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:23.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:23.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:23.157 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:23.158 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:23.158 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:23.158 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:23.158 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:23.158 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:23.158 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:23.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:23.416 00:08:23.416 --- 10.0.0.2 ping statistics --- 00:08:23.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.416 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:23.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:23.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:23.416 00:08:23.416 --- 10.0.0.3 ping statistics --- 00:08:23.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.416 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:23.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:23.416 00:08:23.416 --- 10.0.0.1 ping statistics --- 00:08:23.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.416 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65395 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65395 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65395 ']' 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.416 12:32:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.416 [2024-07-15 12:32:55.963141] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:23.416 [2024-07-15 12:32:55.963271] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.674 [2024-07-15 12:32:56.103555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.674 [2024-07-15 12:32:56.223099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.674 [2024-07-15 12:32:56.223207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:23.674 [2024-07-15 12:32:56.223241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.674 [2024-07-15 12:32:56.223259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.674 [2024-07-15 12:32:56.223275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.674 [2024-07-15 12:32:56.223455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.674 [2024-07-15 12:32:56.224102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.674 [2024-07-15 12:32:56.224123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.674 [2024-07-15 12:32:56.289977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:24.241 12:32:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.241 12:32:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:24.241 12:32:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.241 12:32:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.241 12:32:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.499 12:32:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.499 12:32:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.499 [2024-07-15 12:32:57.180928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.757 12:32:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.016 12:32:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:25.016 12:32:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.275 12:32:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:25.275 12:32:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:25.592 12:32:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:25.870 12:32:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=04baf302-fd20-40b8-b03a-c2f2b443c8c4 00:08:25.870 12:32:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 04baf302-fd20-40b8-b03a-c2f2b443c8c4 lvol 20 00:08:26.128 12:32:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=989e66ca-9b33-473c-9ec2-b7fe76ed73bb 00:08:26.128 12:32:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.387 12:32:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 989e66ca-9b33-473c-9ec2-b7fe76ed73bb 00:08:26.645 12:32:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.904 [2024-07-15 12:32:59.345708] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.904 12:32:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.904 12:32:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65476 00:08:26.904 12:32:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:26.904 12:32:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:28.279 12:33:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 989e66ca-9b33-473c-9ec2-b7fe76ed73bb MY_SNAPSHOT 00:08:28.279 12:33:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d440832b-823e-4469-9611-5857a12a42b3 00:08:28.279 12:33:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 989e66ca-9b33-473c-9ec2-b7fe76ed73bb 30 00:08:28.846 12:33:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d440832b-823e-4469-9611-5857a12a42b3 MY_CLONE 00:08:28.846 12:33:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b3e7902d-f84c-4e1d-a870-5d32a3ebe4bc 00:08:28.846 12:33:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b3e7902d-f84c-4e1d-a870-5d32a3ebe4bc 00:08:29.414 12:33:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65476 00:08:37.577 Initializing NVMe Controllers 00:08:37.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:37.577 Controller IO queue size 128, less than required. 00:08:37.577 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:37.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:37.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:37.577 Initialization complete. Launching workers. 
00:08:37.577 ======================================================== 00:08:37.577 Latency(us) 00:08:37.577 Device Information : IOPS MiB/s Average min max 00:08:37.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10323.14 40.32 12400.66 2220.07 74117.05 00:08:37.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10383.24 40.56 12326.18 1803.90 79702.41 00:08:37.577 ======================================================== 00:08:37.577 Total : 20706.38 80.88 12363.31 1803.90 79702.41 00:08:37.577 00:08:37.577 12:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:37.577 12:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 989e66ca-9b33-473c-9ec2-b7fe76ed73bb 00:08:37.834 12:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04baf302-fd20-40b8-b03a-c2f2b443c8c4 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.092 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.092 rmmod nvme_tcp 00:08:38.403 rmmod nvme_fabrics 00:08:38.403 rmmod nvme_keyring 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65395 ']' 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65395 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65395 ']' 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65395 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65395 00:08:38.403 killing process with pid 65395 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65395' 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65395 00:08:38.403 12:33:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65395 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
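Condensed from the xtrace above, the body of the nvmf_lvol test reduces to the RPC sequence below. This is a hand-written sketch rather than the test script itself: /home/vagrant/spdk_repo/spdk/scripts/rpc.py is shortened to rpc.py, the shell variables and $(...) captures are illustrative (each create call prints the name or UUID of the object it made), and the spdk_nvme_perf run against the exported namespace is omitted.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                  # prints Malloc0
    rpc.py bdev_malloc_create 64 512                                  # prints Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # lvstore UUID
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB lvol, prints its UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)             # snapshot UUID
    rpc.py bdev_lvol_resize "$lvol" 30                                # grow the live lvol while the snapshot exists
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)                  # clone UUID
    rpc.py bdev_lvol_inflate "$clone"                                 # decouple the clone from its snapshot
    # teardown, mirroring the trace that continues below
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete "$lvol"
    rpc.py bdev_lvol_delete_lvstore -u "$lvs"
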
00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:38.663 00:08:38.663 real 0m15.763s 00:08:38.663 user 1m5.259s 00:08:38.663 sys 0m4.245s 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 ************************************ 00:08:38.663 END TEST nvmf_lvol 00:08:38.663 ************************************ 00:08:38.663 12:33:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:38.663 12:33:11 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:38.663 12:33:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.663 12:33:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.663 12:33:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 ************************************ 00:08:38.663 START TEST nvmf_lvs_grow 00:08:38.663 ************************************ 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:38.663 * Looking for test storage... 
00:08:38.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.663 12:33:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:38.664 Cannot find device "nvmf_tgt_br" 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.664 Cannot find device "nvmf_tgt_br2" 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:38.664 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:38.922 Cannot find device "nvmf_tgt_br" 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:38.922 Cannot find device "nvmf_tgt_br2" 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.922 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.922 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:39.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:08:39.181 00:08:39.181 --- 10.0.0.2 ping statistics --- 00:08:39.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.181 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:39.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:39.181 00:08:39.181 --- 10.0.0.3 ping statistics --- 00:08:39.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.181 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:39.181 00:08:39.181 --- 10.0.0.1 ping statistics --- 00:08:39.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.181 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65799 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65799 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65799 ']' 00:08:39.181 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.182 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.182 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
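For orientation, the veth topology that nvmf_veth_init has just rebuilt here (and built once before for the lvol test) comes down to the commands below, stripped of the xtrace framing. This is a sketch of what the trace shows, not a substitute for nvmf/common.sh; the interface names and 10.0.0.x addresses are the ones the script uses, and the commands assume root privileges.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target-side pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                              # bridge joining the host-side ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings just above are the sanity check on that wiring: the target runs inside nvmf_tgt_ns_spdk (NVMF_TARGET_NS_CMD is prepended to NVMF_APP) and listens on 10.0.0.2/10.0.0.3, while the initiator stays in the root namespace on 10.0.0.1 and reaches it across nvmf_br.
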
00:08:39.182 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.182 12:33:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.182 [2024-07-15 12:33:11.740395] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:39.182 [2024-07-15 12:33:11.741296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.440 [2024-07-15 12:33:11.881902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.440 [2024-07-15 12:33:11.998924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.440 [2024-07-15 12:33:11.998977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.440 [2024-07-15 12:33:11.999006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.440 [2024-07-15 12:33:11.999015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.440 [2024-07-15 12:33:11.999023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.440 [2024-07-15 12:33:11.999055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.440 [2024-07-15 12:33:12.053990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.377 12:33:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.377 12:33:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:40.377 12:33:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.377 12:33:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.377 12:33:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.377 12:33:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.377 12:33:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.377 [2024-07-15 12:33:13.025787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.377 12:33:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:40.377 12:33:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.377 12:33:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.377 12:33:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.377 ************************************ 00:08:40.377 START TEST lvs_grow_clean 00:08:40.377 ************************************ 00:08:40.377 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:40.378 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:40.648 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:40.648 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:40.648 12:33:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:40.648 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:40.648 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:40.648 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:40.648 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:40.648 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.911 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.911 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:41.169 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:41.169 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:41.169 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:41.428 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:41.428 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:41.428 12:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 lvol 150 00:08:41.686 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fb2b2481-6d60-4d29-85ba-caf363e7162d 00:08:41.686 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:41.686 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:41.976 [2024-07-15 12:33:14.374024] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:41.976 [2024-07-15 12:33:14.374130] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:41.976 true 00:08:41.976 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:41.976 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:41.976 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:41.976 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:42.235 12:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb2b2481-6d60-4d29-85ba-caf363e7162d 00:08:42.494 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.753 [2024-07-15 12:33:15.414590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.012 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65887 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65887 /var/tmp/bdevperf.sock 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65887 ']' 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.271 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:43.271 [2024-07-15 12:33:15.762297] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:43.271 [2024-07-15 12:33:15.762688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65887 ] 00:08:43.271 [2024-07-15 12:33:15.904345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.529 [2024-07-15 12:33:16.034124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.529 [2024-07-15 12:33:16.088864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.096 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.096 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:44.096 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:44.663 Nvme0n1 00:08:44.663 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:44.921 [ 00:08:44.921 { 00:08:44.921 "name": "Nvme0n1", 00:08:44.921 "aliases": [ 00:08:44.921 "fb2b2481-6d60-4d29-85ba-caf363e7162d" 00:08:44.921 ], 00:08:44.921 "product_name": "NVMe disk", 00:08:44.921 "block_size": 4096, 00:08:44.921 "num_blocks": 38912, 00:08:44.921 "uuid": "fb2b2481-6d60-4d29-85ba-caf363e7162d", 00:08:44.921 "assigned_rate_limits": { 00:08:44.921 "rw_ios_per_sec": 0, 00:08:44.921 "rw_mbytes_per_sec": 0, 00:08:44.921 "r_mbytes_per_sec": 0, 00:08:44.921 "w_mbytes_per_sec": 0 00:08:44.921 }, 00:08:44.921 "claimed": false, 00:08:44.921 "zoned": false, 00:08:44.922 "supported_io_types": { 00:08:44.922 "read": true, 00:08:44.922 "write": true, 00:08:44.922 "unmap": true, 00:08:44.922 "flush": true, 00:08:44.922 "reset": true, 00:08:44.922 "nvme_admin": true, 00:08:44.922 "nvme_io": true, 00:08:44.922 "nvme_io_md": false, 00:08:44.922 "write_zeroes": true, 00:08:44.922 "zcopy": false, 00:08:44.922 "get_zone_info": false, 00:08:44.922 "zone_management": false, 00:08:44.922 "zone_append": false, 00:08:44.922 "compare": true, 00:08:44.922 "compare_and_write": true, 00:08:44.922 "abort": true, 00:08:44.922 "seek_hole": false, 00:08:44.922 "seek_data": false, 00:08:44.922 "copy": true, 00:08:44.922 "nvme_iov_md": false 00:08:44.922 }, 00:08:44.922 "memory_domains": [ 00:08:44.922 { 00:08:44.922 "dma_device_id": "system", 00:08:44.922 "dma_device_type": 1 00:08:44.922 } 00:08:44.922 ], 00:08:44.922 "driver_specific": { 00:08:44.922 "nvme": [ 00:08:44.922 { 00:08:44.922 "trid": { 00:08:44.922 "trtype": "TCP", 00:08:44.922 "adrfam": "IPv4", 00:08:44.922 "traddr": "10.0.0.2", 00:08:44.922 "trsvcid": "4420", 00:08:44.922 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:44.922 }, 00:08:44.922 "ctrlr_data": { 00:08:44.922 "cntlid": 1, 00:08:44.922 "vendor_id": "0x8086", 00:08:44.922 "model_number": "SPDK bdev Controller", 00:08:44.922 "serial_number": "SPDK0", 00:08:44.922 "firmware_revision": "24.09", 00:08:44.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:44.922 "oacs": { 00:08:44.922 "security": 0, 00:08:44.922 "format": 0, 00:08:44.922 "firmware": 0, 00:08:44.922 "ns_manage": 0 00:08:44.922 }, 00:08:44.922 "multi_ctrlr": true, 00:08:44.922 
"ana_reporting": false 00:08:44.922 }, 00:08:44.922 "vs": { 00:08:44.922 "nvme_version": "1.3" 00:08:44.922 }, 00:08:44.922 "ns_data": { 00:08:44.922 "id": 1, 00:08:44.922 "can_share": true 00:08:44.922 } 00:08:44.922 } 00:08:44.922 ], 00:08:44.922 "mp_policy": "active_passive" 00:08:44.922 } 00:08:44.922 } 00:08:44.922 ] 00:08:44.922 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:44.922 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65905 00:08:44.922 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:44.922 Running I/O for 10 seconds... 00:08:45.858 Latency(us) 00:08:45.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.858 Nvme0n1 : 1.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:45.858 =================================================================================================================== 00:08:45.858 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:45.858 00:08:46.791 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:47.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.049 Nvme0n1 : 2.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:47.049 =================================================================================================================== 00:08:47.049 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:47.049 00:08:47.049 true 00:08:47.049 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:47.049 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:47.307 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:47.307 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:47.307 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65905 00:08:47.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.873 Nvme0n1 : 3.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:47.873 =================================================================================================================== 00:08:47.873 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:47.873 00:08:49.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.248 Nvme0n1 : 4.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:49.248 =================================================================================================================== 00:08:49.248 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:49.248 00:08:49.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.815 Nvme0n1 : 5.00 7467.60 29.17 0.00 0.00 0.00 0.00 0.00 00:08:49.815 =================================================================================================================== 00:08:49.815 Total : 7467.60 29.17 0.00 0.00 0.00 
0.00 0.00 00:08:49.815 00:08:51.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.193 Nvme0n1 : 6.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:51.193 =================================================================================================================== 00:08:51.193 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:51.193 00:08:52.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.137 Nvme0n1 : 7.00 7420.43 28.99 0.00 0.00 0.00 0.00 0.00 00:08:52.137 =================================================================================================================== 00:08:52.137 Total : 7420.43 28.99 0.00 0.00 0.00 0.00 0.00 00:08:52.137 00:08:53.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.073 Nvme0n1 : 8.00 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:53.073 =================================================================================================================== 00:08:53.073 Total : 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:53.073 00:08:54.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.010 Nvme0n1 : 9.00 7210.78 28.17 0.00 0.00 0.00 0.00 0.00 00:08:54.010 =================================================================================================================== 00:08:54.010 Total : 7210.78 28.17 0.00 0.00 0.00 0.00 0.00 00:08:54.010 00:08:54.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.946 Nvme0n1 : 10.00 7150.10 27.93 0.00 0.00 0.00 0.00 0.00 00:08:54.946 =================================================================================================================== 00:08:54.946 Total : 7150.10 27.93 0.00 0.00 0.00 0.00 0.00 00:08:54.946 00:08:54.946 00:08:54.946 Latency(us) 00:08:54.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.946 Nvme0n1 : 10.01 7154.85 27.95 0.00 0.00 17884.79 14298.76 54335.30 00:08:54.946 =================================================================================================================== 00:08:54.946 Total : 7154.85 27.95 0.00 0.00 17884.79 14298.76 54335.30 00:08:54.946 0 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65887 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65887 ']' 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65887 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65887 00:08:54.946 killing process with pid 65887 00:08:54.946 Received shutdown signal, test time was about 10.000000 seconds 00:08:54.946 00:08:54.946 Latency(us) 00:08:54.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.946 =================================================================================================================== 00:08:54.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65887' 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65887 00:08:54.946 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65887 00:08:55.205 12:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.464 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:55.723 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:55.723 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:55.982 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:55.982 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:55.982 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.241 [2024-07-15 12:33:28.855396] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:56.241 12:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:56.501 request: 00:08:56.501 { 00:08:56.501 "uuid": "b75891f3-517f-4b0e-af1b-f873cb0f52b7", 00:08:56.501 "method": "bdev_lvol_get_lvstores", 00:08:56.501 "req_id": 1 00:08:56.501 } 00:08:56.501 Got JSON-RPC error response 00:08:56.501 response: 00:08:56.501 { 00:08:56.501 "code": -19, 00:08:56.501 "message": "No such device" 00:08:56.501 } 00:08:56.501 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:56.501 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.501 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.501 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.501 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.760 aio_bdev 00:08:56.760 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fb2b2481-6d60-4d29-85ba-caf363e7162d 00:08:56.760 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=fb2b2481-6d60-4d29-85ba-caf363e7162d 00:08:56.760 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:56.760 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:56.760 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:56.760 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:56.760 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.018 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb2b2481-6d60-4d29-85ba-caf363e7162d -t 2000 00:08:57.278 [ 00:08:57.278 { 00:08:57.278 "name": "fb2b2481-6d60-4d29-85ba-caf363e7162d", 00:08:57.278 "aliases": [ 00:08:57.278 "lvs/lvol" 00:08:57.278 ], 00:08:57.278 "product_name": "Logical Volume", 00:08:57.278 "block_size": 4096, 00:08:57.278 "num_blocks": 38912, 00:08:57.278 "uuid": "fb2b2481-6d60-4d29-85ba-caf363e7162d", 00:08:57.278 "assigned_rate_limits": { 00:08:57.278 "rw_ios_per_sec": 0, 00:08:57.278 "rw_mbytes_per_sec": 0, 00:08:57.278 "r_mbytes_per_sec": 0, 00:08:57.278 "w_mbytes_per_sec": 0 00:08:57.278 }, 00:08:57.278 "claimed": false, 00:08:57.278 "zoned": false, 00:08:57.278 "supported_io_types": { 00:08:57.278 "read": true, 00:08:57.278 "write": true, 00:08:57.278 "unmap": true, 00:08:57.278 "flush": false, 00:08:57.278 "reset": true, 00:08:57.278 "nvme_admin": false, 00:08:57.278 "nvme_io": false, 00:08:57.278 "nvme_io_md": false, 00:08:57.278 "write_zeroes": true, 00:08:57.278 "zcopy": false, 00:08:57.278 "get_zone_info": false, 00:08:57.278 "zone_management": false, 00:08:57.278 "zone_append": false, 00:08:57.278 "compare": false, 00:08:57.278 "compare_and_write": false, 00:08:57.278 "abort": false, 00:08:57.278 "seek_hole": true, 00:08:57.278 "seek_data": true, 00:08:57.278 "copy": false, 00:08:57.278 "nvme_iov_md": false 00:08:57.278 }, 00:08:57.278 
"driver_specific": { 00:08:57.278 "lvol": { 00:08:57.278 "lvol_store_uuid": "b75891f3-517f-4b0e-af1b-f873cb0f52b7", 00:08:57.278 "base_bdev": "aio_bdev", 00:08:57.278 "thin_provision": false, 00:08:57.278 "num_allocated_clusters": 38, 00:08:57.278 "snapshot": false, 00:08:57.278 "clone": false, 00:08:57.278 "esnap_clone": false 00:08:57.278 } 00:08:57.278 } 00:08:57.278 } 00:08:57.278 ] 00:08:57.278 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:57.278 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:57.278 12:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:57.537 12:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:57.537 12:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:57.537 12:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:57.796 12:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:57.796 12:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fb2b2481-6d60-4d29-85ba-caf363e7162d 00:08:58.055 12:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b75891f3-517f-4b0e-af1b-f873cb0f52b7 00:08:58.314 12:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.572 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.140 ************************************ 00:08:59.140 END TEST lvs_grow_clean 00:08:59.140 ************************************ 00:08:59.140 00:08:59.140 real 0m18.517s 00:08:59.140 user 0m17.341s 00:08:59.140 sys 0m2.721s 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.140 ************************************ 00:08:59.140 START TEST lvs_grow_dirty 00:08:59.140 ************************************ 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.140 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.399 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:59.399 12:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:59.657 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b8846638-89bc-4edd-81f9-45fd8b6a9727 00:08:59.657 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:08:59.657 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:59.914 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.914 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.914 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b8846638-89bc-4edd-81f9-45fd8b6a9727 lvol 150 00:09:00.172 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6a45e477-ca74-4776-bed4-21f114f70c85 00:09:00.172 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.172 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:00.429 [2024-07-15 12:33:32.905464] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:00.429 [2024-07-15 12:33:32.905548] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:00.429 true 00:09:00.429 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:00.429 12:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:00.686 12:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:00.686 12:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.944 12:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a45e477-ca74-4776-bed4-21f114f70c85 00:09:01.202 12:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.460 [2024-07-15 12:33:33.962103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.460 12:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66159 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66159 /var/tmp/bdevperf.sock 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66159 ']' 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.718 12:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.718 [2024-07-15 12:33:34.300315] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:09:01.718 [2024-07-15 12:33:34.300413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66159 ] 00:09:01.977 [2024-07-15 12:33:34.437600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.977 [2024-07-15 12:33:34.531578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.977 [2024-07-15 12:33:34.584606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:02.911 12:33:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.911 12:33:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:02.911 12:33:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:03.168 Nvme0n1 00:09:03.168 12:33:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:03.426 [ 00:09:03.426 { 00:09:03.426 "name": "Nvme0n1", 00:09:03.426 "aliases": [ 00:09:03.426 "6a45e477-ca74-4776-bed4-21f114f70c85" 00:09:03.426 ], 00:09:03.426 "product_name": "NVMe disk", 00:09:03.426 "block_size": 4096, 00:09:03.426 "num_blocks": 38912, 00:09:03.426 "uuid": "6a45e477-ca74-4776-bed4-21f114f70c85", 00:09:03.426 "assigned_rate_limits": { 00:09:03.426 "rw_ios_per_sec": 0, 00:09:03.426 "rw_mbytes_per_sec": 0, 00:09:03.426 "r_mbytes_per_sec": 0, 00:09:03.426 "w_mbytes_per_sec": 0 00:09:03.426 }, 00:09:03.426 "claimed": false, 00:09:03.426 "zoned": false, 00:09:03.426 "supported_io_types": { 00:09:03.426 "read": true, 00:09:03.426 "write": true, 00:09:03.426 "unmap": true, 00:09:03.426 "flush": true, 00:09:03.426 "reset": true, 00:09:03.426 "nvme_admin": true, 00:09:03.426 "nvme_io": true, 00:09:03.426 "nvme_io_md": false, 00:09:03.426 "write_zeroes": true, 00:09:03.426 "zcopy": false, 00:09:03.426 "get_zone_info": false, 00:09:03.426 "zone_management": false, 00:09:03.426 "zone_append": false, 00:09:03.426 "compare": true, 00:09:03.426 "compare_and_write": true, 00:09:03.426 "abort": true, 00:09:03.426 "seek_hole": false, 00:09:03.426 "seek_data": false, 00:09:03.426 "copy": true, 00:09:03.426 "nvme_iov_md": false 00:09:03.426 }, 00:09:03.426 "memory_domains": [ 00:09:03.426 { 00:09:03.426 "dma_device_id": "system", 00:09:03.426 "dma_device_type": 1 00:09:03.426 } 00:09:03.426 ], 00:09:03.426 "driver_specific": { 00:09:03.426 "nvme": [ 00:09:03.426 { 00:09:03.426 "trid": { 00:09:03.426 "trtype": "TCP", 00:09:03.426 "adrfam": "IPv4", 00:09:03.426 "traddr": "10.0.0.2", 00:09:03.426 "trsvcid": "4420", 00:09:03.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:03.426 }, 00:09:03.426 "ctrlr_data": { 00:09:03.426 "cntlid": 1, 00:09:03.426 "vendor_id": "0x8086", 00:09:03.426 "model_number": "SPDK bdev Controller", 00:09:03.426 "serial_number": "SPDK0", 00:09:03.426 "firmware_revision": "24.09", 00:09:03.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.426 "oacs": { 00:09:03.426 "security": 0, 00:09:03.426 "format": 0, 00:09:03.426 "firmware": 0, 00:09:03.426 "ns_manage": 0 00:09:03.426 }, 00:09:03.426 "multi_ctrlr": true, 00:09:03.426 
"ana_reporting": false 00:09:03.426 }, 00:09:03.426 "vs": { 00:09:03.426 "nvme_version": "1.3" 00:09:03.426 }, 00:09:03.426 "ns_data": { 00:09:03.426 "id": 1, 00:09:03.426 "can_share": true 00:09:03.426 } 00:09:03.426 } 00:09:03.426 ], 00:09:03.426 "mp_policy": "active_passive" 00:09:03.426 } 00:09:03.426 } 00:09:03.426 ] 00:09:03.426 12:33:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66183 00:09:03.426 12:33:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.426 12:33:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:03.426 Running I/O for 10 seconds... 00:09:04.414 Latency(us) 00:09:04.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.414 Nvme0n1 : 1.00 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:09:04.414 =================================================================================================================== 00:09:04.414 Total : 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:09:04.414 00:09:05.349 12:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:05.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.349 Nvme0n1 : 2.00 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:09:05.349 =================================================================================================================== 00:09:05.349 Total : 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:09:05.349 00:09:05.607 true 00:09:05.607 12:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:05.607 12:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:05.866 12:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:05.866 12:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:05.866 12:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66183 00:09:06.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.433 Nvme0n1 : 3.00 8212.67 32.08 0.00 0.00 0.00 0.00 0.00 00:09:06.433 =================================================================================================================== 00:09:06.433 Total : 8212.67 32.08 0.00 0.00 0.00 0.00 0.00 00:09:06.433 00:09:07.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.382 Nvme0n1 : 4.00 8191.50 32.00 0.00 0.00 0.00 0.00 0.00 00:09:07.382 =================================================================================================================== 00:09:07.382 Total : 8191.50 32.00 0.00 0.00 0.00 0.00 0.00 00:09:07.382 00:09:08.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.756 Nvme0n1 : 5.00 8204.20 32.05 0.00 0.00 0.00 0.00 0.00 00:09:08.756 =================================================================================================================== 00:09:08.756 Total : 8204.20 32.05 0.00 0.00 0.00 
0.00 0.00 00:09:08.756 00:09:09.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.693 Nvme0n1 : 6.00 7979.83 31.17 0.00 0.00 0.00 0.00 0.00 00:09:09.693 =================================================================================================================== 00:09:09.693 Total : 7979.83 31.17 0.00 0.00 0.00 0.00 0.00 00:09:09.693 00:09:10.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.630 Nvme0n1 : 7.00 7674.57 29.98 0.00 0.00 0.00 0.00 0.00 00:09:10.630 =================================================================================================================== 00:09:10.630 Total : 7674.57 29.98 0.00 0.00 0.00 0.00 0.00 00:09:10.630 00:09:11.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.565 Nvme0n1 : 8.00 7540.75 29.46 0.00 0.00 0.00 0.00 0.00 00:09:11.565 =================================================================================================================== 00:09:11.565 Total : 7540.75 29.46 0.00 0.00 0.00 0.00 0.00 00:09:11.565 00:09:12.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.502 Nvme0n1 : 9.00 7422.56 28.99 0.00 0.00 0.00 0.00 0.00 00:09:12.502 =================================================================================================================== 00:09:12.502 Total : 7422.56 28.99 0.00 0.00 0.00 0.00 0.00 00:09:12.502 00:09:13.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.440 Nvme0n1 : 10.00 7340.70 28.67 0.00 0.00 0.00 0.00 0.00 00:09:13.440 =================================================================================================================== 00:09:13.440 Total : 7340.70 28.67 0.00 0.00 0.00 0.00 0.00 00:09:13.440 00:09:13.440 00:09:13.440 Latency(us) 00:09:13.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.440 Nvme0n1 : 10.02 7341.02 28.68 0.00 0.00 17430.95 12809.31 156333.15 00:09:13.440 =================================================================================================================== 00:09:13.440 Total : 7341.02 28.68 0.00 0.00 17430.95 12809.31 156333.15 00:09:13.440 0 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66159 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66159 ']' 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66159 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66159 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66159' 00:09:13.441 killing process with pid 66159 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66159 
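The ten one-second samples above come from bdevperf writing to Nvme0n1 while the lvstore underneath it is grown at the two-second mark. Roughly, the sequence is the following sketch (same paths, socket and UUID as in this run; the explicit sleep only stands in for the script's own synchronization and is illustrative, not the literal test code):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# the backing file was already enlarged to 400M and rescanned (truncate + bdev_aio_rescan earlier in the log)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 2                                                               # illustrative: let the random-write run ramp up
$RPC bdev_lvol_grow_lvstore -u b8846638-89bc-4edd-81f9-45fd8b6a9727   # grow onto the new blocks while I/O is in flight
wait "$run_test_pid"                                                  # the 10 s run is expected to finish without errors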
00:09:13.441 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.441 00:09:13.441 Latency(us) 00:09:13.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.441 =================================================================================================================== 00:09:13.441 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.441 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66159 00:09:13.700 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.958 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:14.216 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:14.216 12:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65799 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65799 00:09:14.474 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65799 Killed "${NVMF_APP[@]}" "$@" 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66316 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66316 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66316 ']' 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
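What makes this variant "dirty" is visible right here: rather than unloading the lvstore, the test SIGKILLs the nvmf_tgt that owns it (pid 65799) and starts a fresh one, leaving the grown metadata to be replayed from the AIO file later. As a hedged sketch, this step amounts to:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nvmfpid=65799                      # pid of the target started earlier in this run
$RPC nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
kill -9 "$nvmfpid"                 # no clean lvstore unload on purpose
wait "$nvmfpid" || true            # the non-zero exit status is expected
# replacement target inside the test namespace, with the same flags the log shows
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!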
00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.474 12:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.474 [2024-07-15 12:33:47.116499] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:14.474 [2024-07-15 12:33:47.116570] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.733 [2024-07-15 12:33:47.252496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.733 [2024-07-15 12:33:47.337189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.733 [2024-07-15 12:33:47.337253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.733 [2024-07-15 12:33:47.337263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.733 [2024-07-15 12:33:47.337271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.733 [2024-07-15 12:33:47.337278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.733 [2024-07-15 12:33:47.337300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.733 [2024-07-15 12:33:47.388313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.668 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.668 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:15.669 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.669 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.669 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:15.669 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.669 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.669 [2024-07-15 12:33:48.323527] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:15.669 [2024-07-15 12:33:48.323845] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:15.669 [2024-07-15 12:33:48.324028] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6a45e477-ca74-4776-bed4-21f114f70c85 00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6a45e477-ca74-4776-bed4-21f114f70c85 00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
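The "Performing recovery on blobstore" notice above is the heart of the dirty test: simply re-creating the AIO bdev over the same file makes the lvol layer examine it, replay the uncleanly closed blobstore and re-register lvs/lvol on its own. A sketch of how the recovered state is then verified (the UUIDs and the 61/99 cluster counts are the ones from this run):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096                        # triggers examine + blobstore recovery
$RPC bdev_wait_for_examine
$RPC bdev_get_bdevs -b 6a45e477-ca74-4776-bed4-21f114f70c85 -t 2000   # the lvol is back under its old UUID
lvs=$($RPC bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727)
(( $(jq -r '.[0].free_clusters' <<<"$lvs") == 61 ))                   # grown capacity survived the SIGKILL
(( $(jq -r '.[0].total_data_clusters' <<<"$lvs") == 99 ))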
00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:15.927 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:16.186 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a45e477-ca74-4776-bed4-21f114f70c85 -t 2000 00:09:16.186 [ 00:09:16.186 { 00:09:16.186 "name": "6a45e477-ca74-4776-bed4-21f114f70c85", 00:09:16.186 "aliases": [ 00:09:16.186 "lvs/lvol" 00:09:16.186 ], 00:09:16.186 "product_name": "Logical Volume", 00:09:16.186 "block_size": 4096, 00:09:16.186 "num_blocks": 38912, 00:09:16.186 "uuid": "6a45e477-ca74-4776-bed4-21f114f70c85", 00:09:16.186 "assigned_rate_limits": { 00:09:16.186 "rw_ios_per_sec": 0, 00:09:16.186 "rw_mbytes_per_sec": 0, 00:09:16.186 "r_mbytes_per_sec": 0, 00:09:16.186 "w_mbytes_per_sec": 0 00:09:16.186 }, 00:09:16.186 "claimed": false, 00:09:16.186 "zoned": false, 00:09:16.186 "supported_io_types": { 00:09:16.186 "read": true, 00:09:16.186 "write": true, 00:09:16.186 "unmap": true, 00:09:16.186 "flush": false, 00:09:16.186 "reset": true, 00:09:16.186 "nvme_admin": false, 00:09:16.186 "nvme_io": false, 00:09:16.186 "nvme_io_md": false, 00:09:16.186 "write_zeroes": true, 00:09:16.186 "zcopy": false, 00:09:16.186 "get_zone_info": false, 00:09:16.186 "zone_management": false, 00:09:16.186 "zone_append": false, 00:09:16.186 "compare": false, 00:09:16.186 "compare_and_write": false, 00:09:16.186 "abort": false, 00:09:16.186 "seek_hole": true, 00:09:16.186 "seek_data": true, 00:09:16.186 "copy": false, 00:09:16.186 "nvme_iov_md": false 00:09:16.186 }, 00:09:16.186 "driver_specific": { 00:09:16.186 "lvol": { 00:09:16.186 "lvol_store_uuid": "b8846638-89bc-4edd-81f9-45fd8b6a9727", 00:09:16.186 "base_bdev": "aio_bdev", 00:09:16.186 "thin_provision": false, 00:09:16.186 "num_allocated_clusters": 38, 00:09:16.187 "snapshot": false, 00:09:16.187 "clone": false, 00:09:16.187 "esnap_clone": false 00:09:16.187 } 00:09:16.187 } 00:09:16.187 } 00:09:16.187 ] 00:09:16.187 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:16.187 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:16.187 12:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:16.445 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:16.445 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:16.445 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:16.704 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:16.704 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.963 [2024-07-15 12:33:49.540956] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:16.963 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:17.222 request: 00:09:17.222 { 00:09:17.222 "uuid": "b8846638-89bc-4edd-81f9-45fd8b6a9727", 00:09:17.222 "method": "bdev_lvol_get_lvstores", 00:09:17.222 "req_id": 1 00:09:17.222 } 00:09:17.222 Got JSON-RPC error response 00:09:17.222 response: 00:09:17.222 { 00:09:17.222 "code": -19, 00:09:17.222 "message": "No such device" 00:09:17.222 } 00:09:17.222 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:17.222 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.222 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.222 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.222 12:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.481 aio_bdev 00:09:17.481 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6a45e477-ca74-4776-bed4-21f114f70c85 00:09:17.481 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6a45e477-ca74-4776-bed4-21f114f70c85 00:09:17.481 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:17.481 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:17.481 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:17.481 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:17.481 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.739 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a45e477-ca74-4776-bed4-21f114f70c85 -t 2000 00:09:17.998 [ 00:09:17.998 { 00:09:17.998 "name": "6a45e477-ca74-4776-bed4-21f114f70c85", 00:09:17.998 "aliases": [ 00:09:17.998 "lvs/lvol" 00:09:17.998 ], 00:09:17.998 "product_name": "Logical Volume", 00:09:17.998 "block_size": 4096, 00:09:17.998 "num_blocks": 38912, 00:09:17.998 "uuid": "6a45e477-ca74-4776-bed4-21f114f70c85", 00:09:17.998 "assigned_rate_limits": { 00:09:17.998 "rw_ios_per_sec": 0, 00:09:17.998 "rw_mbytes_per_sec": 0, 00:09:17.998 "r_mbytes_per_sec": 0, 00:09:17.998 "w_mbytes_per_sec": 0 00:09:17.998 }, 00:09:17.998 "claimed": false, 00:09:17.998 "zoned": false, 00:09:17.998 "supported_io_types": { 00:09:17.998 "read": true, 00:09:17.998 "write": true, 00:09:17.998 "unmap": true, 00:09:17.998 "flush": false, 00:09:17.998 "reset": true, 00:09:17.998 "nvme_admin": false, 00:09:17.998 "nvme_io": false, 00:09:17.998 "nvme_io_md": false, 00:09:17.998 "write_zeroes": true, 00:09:17.998 "zcopy": false, 00:09:17.998 "get_zone_info": false, 00:09:17.998 "zone_management": false, 00:09:17.998 "zone_append": false, 00:09:17.998 "compare": false, 00:09:17.998 "compare_and_write": false, 00:09:17.998 "abort": false, 00:09:17.998 "seek_hole": true, 00:09:17.998 "seek_data": true, 00:09:17.998 "copy": false, 00:09:17.999 "nvme_iov_md": false 00:09:17.999 }, 00:09:17.999 "driver_specific": { 00:09:17.999 "lvol": { 00:09:17.999 "lvol_store_uuid": "b8846638-89bc-4edd-81f9-45fd8b6a9727", 00:09:17.999 "base_bdev": "aio_bdev", 00:09:17.999 "thin_provision": false, 00:09:17.999 "num_allocated_clusters": 38, 00:09:17.999 "snapshot": false, 00:09:17.999 "clone": false, 00:09:17.999 "esnap_clone": false 00:09:17.999 } 00:09:17.999 } 00:09:17.999 } 00:09:17.999 ] 00:09:17.999 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:17.999 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:17.999 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.257 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.257 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:18.257 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:18.257 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:18.257 12:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6a45e477-ca74-4776-bed4-21f114f70c85 00:09:18.516 12:33:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b8846638-89bc-4edd-81f9-45fd8b6a9727 00:09:18.775 12:33:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.033 12:33:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.292 00:09:19.292 real 0m20.334s 00:09:19.292 user 0m42.168s 00:09:19.292 sys 0m9.380s 00:09:19.292 12:33:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.292 ************************************ 00:09:19.292 END TEST lvs_grow_dirty 00:09:19.292 ************************************ 00:09:19.292 12:33:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:19.551 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:19.552 nvmf_trace.0 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.552 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.552 rmmod nvme_tcp 00:09:19.820 rmmod nvme_fabrics 00:09:19.820 rmmod nvme_keyring 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66316 ']' 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66316 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66316 ']' 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66316 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66316 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.820 killing process with pid 66316 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66316' 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66316 00:09:19.820 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66316 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:20.094 00:09:20.094 real 0m41.350s 00:09:20.094 user 1m5.531s 00:09:20.094 sys 0m12.818s 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.094 12:33:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.094 ************************************ 00:09:20.094 END TEST nvmf_lvs_grow 00:09:20.094 ************************************ 00:09:20.094 12:33:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.094 12:33:52 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:20.094 12:33:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.094 12:33:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.094 12:33:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.094 ************************************ 00:09:20.094 START TEST nvmf_bdev_io_wait 00:09:20.094 ************************************ 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:20.094 * Looking for test storage... 
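Between the two suites the usual trace capture and nvmftestfini teardown just ran; condensed, and with the namespace removal simplified (the exact _remove_spdk_ns helper is not expanded in this log, so that line is an assumption), it comes down to roughly:

tar -C /dev/shm -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
kill "$nvmfpid" && wait "$nvmfpid"     # killprocess: stop the nvmf_tgt started for this suite
modprobe -v -r nvme-tcp                # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
modprobe -v -r nvme-fabrics
ip netns delete nvmf_tgt_ns_spdk       # assumption: simplified stand-in for _remove_spdk_ns
ip -4 addr flush nvmf_init_if          # leave the initiator veth unconfigured for the next test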
00:09:20.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.094 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:20.095 Cannot find device "nvmf_tgt_br" 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:20.095 Cannot find device "nvmf_tgt_br2" 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:20.095 Cannot find device "nvmf_tgt_br" 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:20.095 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:20.353 Cannot find device "nvmf_tgt_br2" 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.353 12:33:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.353 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.353 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.353 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.353 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:20.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:20.613 00:09:20.613 --- 10.0.0.2 ping statistics --- 00:09:20.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.613 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:09:20.613 00:09:20.613 --- 10.0.0.3 ping statistics --- 00:09:20.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.613 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:09:20.613 00:09:20.613 --- 10.0.0.1 ping statistics --- 00:09:20.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.613 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66624 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66624 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66624 ']' 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
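The three successful pings above validate the NET_TYPE=virt topology that nvmf_veth_init just rebuilt: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target owns 10.0.0.2/10.0.0.3 inside nvmf_tgt_ns_spdk, and everything is stitched together by one bridge. Condensed from the commands in the log (ordering regrouped slightly for readability):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target 1  <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target 2  <-> bridge
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> targets through the bridge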
00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.613 12:33:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.613 [2024-07-15 12:33:53.131073] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:20.613 [2024-07-15 12:33:53.131185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.613 [2024-07-15 12:33:53.272368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.870 [2024-07-15 12:33:53.366329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.870 [2024-07-15 12:33:53.366396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.870 [2024-07-15 12:33:53.366423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.870 [2024-07-15 12:33:53.366430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.870 [2024-07-15 12:33:53.366437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.870 [2024-07-15 12:33:53.366859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.870 [2024-07-15 12:33:53.367087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.870 [2024-07-15 12:33:53.367198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.870 [2024-07-15 12:33:53.367199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.436 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.436 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:21.436 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.436 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.436 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.694 [2024-07-15 12:33:54.193614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.694 
12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.694 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.695 [2024-07-15 12:33:54.205891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.695 Malloc0 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.695 [2024-07-15 12:33:54.272584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66664 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:21.695 { 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme$subsystem", 00:09:21.695 "trtype": "$TEST_TRANSPORT", 00:09:21.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "$NVMF_PORT", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.695 "hdgst": ${hdgst:-false}, 00:09:21.695 
"ddgst": ${ddgst:-false} 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 } 00:09:21.695 EOF 00:09:21.695 )") 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66667 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66670 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:21.695 { 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme$subsystem", 00:09:21.695 "trtype": "$TEST_TRANSPORT", 00:09:21.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "$NVMF_PORT", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.695 "hdgst": ${hdgst:-false}, 00:09:21.695 "ddgst": ${ddgst:-false} 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 } 00:09:21.695 EOF 00:09:21.695 )") 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66672 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:21.695 { 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme$subsystem", 00:09:21.695 "trtype": "$TEST_TRANSPORT", 00:09:21.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "$NVMF_PORT", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.695 "hdgst": ${hdgst:-false}, 00:09:21.695 "ddgst": ${ddgst:-false} 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 } 00:09:21.695 EOF 00:09:21.695 )") 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:21.695 { 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme$subsystem", 00:09:21.695 "trtype": "$TEST_TRANSPORT", 00:09:21.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "$NVMF_PORT", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.695 "hdgst": ${hdgst:-false}, 00:09:21.695 "ddgst": ${ddgst:-false} 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 } 00:09:21.695 EOF 00:09:21.695 )") 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme1", 00:09:21.695 "trtype": "tcp", 00:09:21.695 "traddr": "10.0.0.2", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "4420", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.695 "hdgst": false, 00:09:21.695 "ddgst": false 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 }' 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme1", 00:09:21.695 "trtype": "tcp", 00:09:21.695 "traddr": "10.0.0.2", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "4420", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.695 "hdgst": false, 00:09:21.695 "ddgst": false 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 }' 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme1", 00:09:21.695 "trtype": "tcp", 00:09:21.695 "traddr": "10.0.0.2", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "4420", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.695 "hdgst": false, 00:09:21.695 "ddgst": false 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 }' 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
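Each bdevperf instance reads its bdev configuration from /dev/fd/63, i.e. from a process substitution around gen_nvmf_target_json rather than from a file on disk. The heredoc-accumulation pattern that helper uses is visible in the trace; below is a condensed sketch with an illustrative function name and with the address, port and NQNs hard-coded to the values of this run.

# Sketch of the config-accumulation pattern traced above: one heredoc per
# subsystem is captured into a bash array, the entries are joined with ','
# (hence the IFS=, in the trace), and jq validates/pretty-prints the result.
gen_attach_config() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do    # default: a single subsystem, "1"
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # With the default single subsystem this is one valid JSON object, matching
    # the printf output in the trace; joining several entries with ',' only
    # becomes valid JSON once embedded in a fuller bdevperf config, which the
    # real helper handles outside this excerpt.
    printf '%s\n' "${config[*]}" | jq .
}

In the real helper the values come from $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and friends, which is why the heredoc in the trace shows shell parameters while the printf output shows the resolved 10.0.0.2:4420 controller.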
00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:21.695 "params": { 00:09:21.695 "name": "Nvme1", 00:09:21.695 "trtype": "tcp", 00:09:21.695 "traddr": "10.0.0.2", 00:09:21.695 "adrfam": "ipv4", 00:09:21.695 "trsvcid": "4420", 00:09:21.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.695 "hdgst": false, 00:09:21.695 "ddgst": false 00:09:21.695 }, 00:09:21.695 "method": "bdev_nvme_attach_controller" 00:09:21.695 }' 00:09:21.695 12:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66664 00:09:21.695 [2024-07-15 12:33:54.328940] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:21.695 [2024-07-15 12:33:54.329014] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:21.695 [2024-07-15 12:33:54.336021] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:21.696 [2024-07-15 12:33:54.336092] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:21.696 [2024-07-15 12:33:54.362811] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:21.696 [2024-07-15 12:33:54.363146] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:21.696 [2024-07-15 12:33:54.367186] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:21.696 [2024-07-15 12:33:54.367260] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:21.954 [2024-07-15 12:33:54.529062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.954 [2024-07-15 12:33:54.605044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.954 [2024-07-15 12:33:54.626098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:22.214 [2024-07-15 12:33:54.681357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.214 [2024-07-15 12:33:54.682467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.214 [2024-07-15 12:33:54.702378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:22.214 [2024-07-15 12:33:54.750091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.214 [2024-07-15 12:33:54.762077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.214 [2024-07-15 12:33:54.780041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:22.214 Running I/O for 1 seconds... 
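The four bdevperf invocations above run concurrently against the same namespace, one workload each (write, read, flush, unmap), pinned to distinct cores via -m 0x10/0x20/0x40/0x80 and given distinct shared-memory ids via -i 1..4 so the EAL instances can coexist. Condensed into the launch-and-wait shape the script uses; the binary path and flags are as traced, and gen_attach_config is the illustrative helper from the previous sketch standing in for gen_nvmf_target_json.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# -q/-o/-w/-t/-s exactly as traced: queue depth 128, 4 KiB I/Os, 1 second run,
# 256 MB of memory per instance. <(...) is what shows up as /dev/fd/63.
$BDEVPERF -m 0x10 -i 1 --json <(gen_attach_config) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_attach_config) -q 128 -o 4096 -w read  -t 1 -s 256 &
READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_attach_config) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_attach_config) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

sync
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-workload latency tables that follow in the log are the output of these four jobs finishing.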
00:09:22.214 [2024-07-15 12:33:54.826996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.214 Running I/O for 1 seconds... 00:09:22.214 [2024-07-15 12:33:54.858067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:22.472 [2024-07-15 12:33:54.906157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.472 Running I/O for 1 seconds... 00:09:22.472 Running I/O for 1 seconds... 00:09:23.410 00:09:23.410 Latency(us) 00:09:23.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.410 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:23.410 Nvme1n1 : 1.02 7036.44 27.49 0.00 0.00 18061.79 9770.82 32887.16 00:09:23.410 =================================================================================================================== 00:09:23.410 Total : 7036.44 27.49 0.00 0.00 18061.79 9770.82 32887.16 00:09:23.410 00:09:23.410 Latency(us) 00:09:23.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.410 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:23.410 Nvme1n1 : 1.00 175685.42 686.27 0.00 0.00 725.94 331.40 1340.51 00:09:23.410 =================================================================================================================== 00:09:23.410 Total : 175685.42 686.27 0.00 0.00 725.94 331.40 1340.51 00:09:23.410 00:09:23.410 Latency(us) 00:09:23.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.410 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:23.410 Nvme1n1 : 1.01 8623.03 33.68 0.00 0.00 14768.83 9055.88 28359.21 00:09:23.410 =================================================================================================================== 00:09:23.410 Total : 8623.03 33.68 0.00 0.00 14768.83 9055.88 28359.21 00:09:23.410 00:09:23.410 Latency(us) 00:09:23.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.410 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:23.410 Nvme1n1 : 1.00 7218.78 28.20 0.00 0.00 17678.98 5034.36 46709.29 00:09:23.410 =================================================================================================================== 00:09:23.410 Total : 7218.78 28.20 0.00 0.00 17678.98 5034.36 46709.29 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66667 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66670 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66672 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:23.669 12:33:56 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.669 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.669 rmmod nvme_tcp 00:09:23.669 rmmod nvme_fabrics 00:09:23.928 rmmod nvme_keyring 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66624 ']' 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66624 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66624 ']' 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66624 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66624 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.928 killing process with pid 66624 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66624' 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66624 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66624 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.928 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.188 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:24.188 00:09:24.188 real 0m4.025s 00:09:24.188 user 0m17.702s 00:09:24.188 sys 0m2.206s 00:09:24.188 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.188 12:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.188 ************************************ 00:09:24.188 END TEST nvmf_bdev_io_wait 00:09:24.188 ************************************ 00:09:24.188 12:33:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:24.188 12:33:56 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:24.188 12:33:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:24.188 12:33:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.188 12:33:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.188 ************************************ 00:09:24.188 START TEST nvmf_queue_depth 00:09:24.188 ************************************ 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:24.188 * Looking for test storage... 00:09:24.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:24.188 12:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:24.189 Cannot find device "nvmf_tgt_br" 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.189 Cannot find device "nvmf_tgt_br2" 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:24.189 12:33:56 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:24.189 Cannot find device "nvmf_tgt_br" 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:24.189 Cannot find device "nvmf_tgt_br2" 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:24.189 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:24.447 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:24.447 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.447 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:24.447 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:24.448 12:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.448 
12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:24.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:24.448 00:09:24.448 --- 10.0.0.2 ping statistics --- 00:09:24.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.448 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:24.448 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.448 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:09:24.448 00:09:24.448 --- 10.0.0.3 ping statistics --- 00:09:24.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.448 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:24.448 00:09:24.448 --- 10.0.0.1 ping statistics --- 00:09:24.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.448 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.448 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66899 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66899 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66899 ']' 00:09:24.707 12:33:57 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.707 12:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.707 [2024-07-15 12:33:57.198526] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:24.707 [2024-07-15 12:33:57.198621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.707 [2024-07-15 12:33:57.343020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.966 [2024-07-15 12:33:57.441128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.966 [2024-07-15 12:33:57.441208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.966 [2024-07-15 12:33:57.441219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.966 [2024-07-15 12:33:57.441227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.966 [2024-07-15 12:33:57.441233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
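nvmfappstart for this test is the single-core variant: it launches the target inside the namespace with -m 0x2, records its pid, and then blocks in waitforlisten until the RPC socket answers (that is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message belongs to). The waitforlisten body is not fully visible in this excerpt, so the polling loop below is an assumption about its behaviour; the retry budget and socket path are from the trace, and probing with rpc.py spdk_get_version is purely illustrative.

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk.sock

# Launch command as traced: instance 0, full tracepoint mask, core mask 0x2,
# inside the target's network namespace.
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Assumed shape of waitforlisten: poll the RPC socket until the app responds.
echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
for ((i = 0; i < 100; i++)); do    # max_retries=100, as in the trace
    if "$RPC_PY" -s "$RPC_SOCK" -t 1 spdk_get_version &>/dev/null; then
        break
    fi
    sleep 0.5
done
(( i < 100 )) || { echo "nvmf_tgt (pid $nvmfpid) never came up" >&2; exit 1; }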
00:09:24.966 [2024-07-15 12:33:57.441260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.966 [2024-07-15 12:33:57.494011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.535 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.535 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:25.535 12:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.535 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.535 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.795 [2024-07-15 12:33:58.256782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.795 Malloc0 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.795 [2024-07-15 12:33:58.319174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66931 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66931 /var/tmp/bdevperf.sock 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66931 ']' 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.795 12:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.795 [2024-07-15 12:33:58.382538] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:25.796 [2024-07-15 12:33:58.382638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66931 ] 00:09:26.054 [2024-07-15 12:33:58.523993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.054 [2024-07-15 12:33:58.640581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.054 [2024-07-15 12:33:58.697654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.991 12:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.991 12:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:26.991 12:33:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:26.991 12:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.992 12:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 NVMe0n1 00:09:26.992 12:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.992 12:33:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:26.992 Running I/O for 10 seconds... 
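Unlike the bdev_io_wait jobs, the queue-depth run starts bdevperf idle (-z) on its own RPC socket, attaches the NVMe-oF controller over that socket, and only then kicks off the pre-configured 10-second verify workload at queue depth 1024 through bdevperf.py. Condensed below, with paths, flags and names as traced; the wait-for-socket step is the same pattern sketched earlier, and the final kill/wait is a simplification of the script's trap-based cleanup.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# -z: start with no workload and wait for RPCs; -r: private RPC socket so this
# instance does not clash with the target's /var/tmp/spdk.sock.
"$BDEVPERF" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# ... wait for $SOCK to answer, as in the earlier waitforlisten sketch ...

# Create bdev NVMe0n1 by connecting to the target inside the namespace.
"$RPC_PY" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Run the configured job (verify, 10 s, qd 1024) and print the result table.
"$BDEVPERF_PY" -s "$SOCK" perform_tests

kill "$bdevperf_pid" && wait "$bdevperf_pid"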
00:09:36.993 00:09:36.993 Latency(us) 00:09:36.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.993 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:36.993 Verification LBA range: start 0x0 length 0x4000 00:09:36.993 NVMe0n1 : 10.07 8482.56 33.13 0.00 0.00 120178.90 12273.11 85315.96 00:09:36.993 =================================================================================================================== 00:09:36.993 Total : 8482.56 33.13 0.00 0.00 120178.90 12273.11 85315.96 00:09:36.993 0 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66931 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66931 ']' 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66931 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66931 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.993 killing process with pid 66931 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66931' 00:09:36.993 Received shutdown signal, test time was about 10.000000 seconds 00:09:36.993 00:09:36.993 Latency(us) 00:09:36.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.993 =================================================================================================================== 00:09:36.993 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66931 00:09:36.993 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66931 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.252 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.252 rmmod nvme_tcp 00:09:37.252 rmmod nvme_fabrics 00:09:37.252 rmmod nvme_keyring 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66899 ']' 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66899 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66899 ']' 00:09:37.511 
12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66899 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66899 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:37.511 killing process with pid 66899 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66899' 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66899 00:09:37.511 12:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66899 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:37.771 00:09:37.771 real 0m13.578s 00:09:37.771 user 0m23.395s 00:09:37.771 sys 0m2.255s 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.771 12:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.771 ************************************ 00:09:37.771 END TEST nvmf_queue_depth 00:09:37.771 ************************************ 00:09:37.771 12:34:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:37.771 12:34:10 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:37.771 12:34:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:37.771 12:34:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.771 12:34:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.771 ************************************ 00:09:37.771 START TEST nvmf_target_multipath 00:09:37.771 ************************************ 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:37.771 * Looking for test storage... 
00:09:37.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.771 12:34:10 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:37.771 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:38.030 Cannot find device "nvmf_tgt_br" 00:09:38.030 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:38.030 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.031 Cannot find device "nvmf_tgt_br2" 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:38.031 Cannot find device "nvmf_tgt_br" 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:38.031 
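At this point nvmf_veth_init is clearing out whatever a previous run left behind; the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host. The trace lines that follow rebuild the test topology. Here is a condensed sketch of that setup half, with every interface name and address taken from the trace (most of the individual "ip link set ... up" calls and the namespace loopback are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings that follow (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are just a reachability check across the bridge before the target application is started.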
12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:38.031 Cannot find device "nvmf_tgt_br2" 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.031 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:38.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:38.290 00:09:38.290 --- 10.0.0.2 ping statistics --- 00:09:38.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.290 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:38.290 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.290 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:09:38.290 00:09:38.290 --- 10.0.0.3 ping statistics --- 00:09:38.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.290 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:38.290 00:09:38.290 --- 10.0.0.1 ping statistics --- 00:09:38.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.290 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.290 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67257 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67257 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 67257 ']' 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.291 12:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:38.291 [2024-07-15 12:34:10.820068] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:38.291 [2024-07-15 12:34:10.820180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.291 [2024-07-15 12:34:10.958230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.549 [2024-07-15 12:34:11.084911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.549 [2024-07-15 12:34:11.084977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.549 [2024-07-15 12:34:11.084989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.549 [2024-07-15 12:34:11.084998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.549 [2024-07-15 12:34:11.085006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
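The target has just been launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the harness is waiting on its RPC socket. Once the reactors below report in, multipath.sh provisions the subsystem over JSON-RPC and connects to it through both listeners. Those calls are wrapped across several trace lines, so here they are pulled together with their arguments copied from the trace ($rpc stands in for scripts/rpc.py; the -r flag on nvmf_create_subsystem is what enables the ANA reporting that the later ana_state checks rely on):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # host side: one controller per listener, both resolving to the same namespace
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

With two controllers to the same subsystem the kernel exposes a single multipath node (nvme0n1, the device fio writes to later) backed by the per-path devices nvme0c0n1 and nvme0c1n1, which is exactly what the get_subsystem/paths logic further down goes looking for.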
00:09:38.549 [2024-07-15 12:34:11.085137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.549 [2024-07-15 12:34:11.085904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.549 [2024-07-15 12:34:11.086050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.549 [2024-07-15 12:34:11.086132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.549 [2024-07-15 12:34:11.142168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.486 12:34:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.486 12:34:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:39.486 12:34:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.486 12:34:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.486 12:34:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:39.486 12:34:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.486 12:34:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:39.486 [2024-07-15 12:34:12.054402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.486 12:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:39.743 Malloc0 00:09:39.743 12:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:40.002 12:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.262 12:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.832 [2024-07-15 12:34:13.210851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.832 12:34:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:41.089 [2024-07-15 12:34:13.535340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:41.089 12:34:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:41.089 12:34:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:41.347 12:34:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.347 12:34:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:09:41.347 12:34:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.347 12:34:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:41.347 12:34:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67352 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:43.247 12:34:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:43.247 [global] 00:09:43.247 thread=1 00:09:43.247 invalidate=1 00:09:43.247 rw=randrw 00:09:43.247 time_based=1 00:09:43.247 runtime=6 00:09:43.247 ioengine=libaio 00:09:43.247 direct=1 00:09:43.247 bs=4096 00:09:43.247 iodepth=128 00:09:43.247 norandommap=0 00:09:43.247 numjobs=1 00:09:43.247 00:09:43.247 verify_dump=1 00:09:43.247 verify_backlog=512 00:09:43.247 verify_state_save=0 00:09:43.247 do_verify=1 00:09:43.247 verify=crc32c-intel 00:09:43.247 [job0] 00:09:43.247 filename=/dev/nvme0n1 00:09:43.247 Could not set queue depth (nvme0n1) 00:09:43.506 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.506 fio-3.35 00:09:43.506 Starting 1 thread 00:09:44.443 12:34:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:44.701 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:44.960 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:45.219 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:45.476 12:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67352 00:09:49.660 00:09:49.660 job0: (groupid=0, jobs=1): err= 0: pid=67373: Mon Jul 15 12:34:22 2024 00:09:49.660 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(246MiB/6009msec) 00:09:49.660 slat (usec): min=6, max=5647, avg=56.14, stdev=210.87 00:09:49.660 clat (usec): min=1561, max=17968, avg=8276.98, stdev=1447.24 00:09:49.660 lat (usec): min=1574, max=17994, avg=8333.12, stdev=1451.62 00:09:49.660 clat percentiles (usec): 00:09:49.660 | 1.00th=[ 4293], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 7570], 00:09:49.660 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:49.660 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11731], 00:09:49.660 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14484], 99.95th=[15270], 00:09:49.660 | 99.99th=[16909] 00:09:49.660 bw ( KiB/s): min= 9864, max=27592, per=53.17%, avg=22255.33, stdev=5279.95, samples=12 00:09:49.660 iops : min= 2466, max= 6898, avg=5563.83, stdev=1319.99, samples=12 00:09:49.660 write: IOPS=6268, BW=24.5MiB/s (25.7MB/s)(131MiB/5341msec); 0 zone resets 00:09:49.660 slat (usec): min=17, max=2384, avg=64.16, stdev=147.17 00:09:49.660 clat (usec): min=682, max=16812, avg=7220.88, stdev=1331.22 00:09:49.660 lat (usec): min=731, max=16838, avg=7285.04, stdev=1335.28 00:09:49.660 clat percentiles (usec): 00:09:49.660 | 1.00th=[ 3359], 5.00th=[ 4359], 10.00th=[ 5735], 20.00th=[ 6718], 00:09:49.660 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:09:49.660 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8291], 95.00th=[ 8717], 00:09:49.660 | 99.00th=[11469], 99.50th=[12387], 99.90th=[14877], 99.95th=[15270], 00:09:49.660 | 99.99th=[16057] 00:09:49.660 bw ( KiB/s): min=10312, max=27024, per=88.86%, avg=22279.33, stdev=4994.73, samples=12 00:09:49.660 iops : min= 2578, max= 6756, avg=5569.83, stdev=1248.68, samples=12 00:09:49.660 lat (usec) : 750=0.01% 00:09:49.660 lat (msec) : 2=0.02%, 4=1.59%, 10=92.45%, 20=5.93% 00:09:49.660 cpu : usr=5.24%, sys=24.50%, ctx=5779, majf=0, minf=96 00:09:49.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:49.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.660 issued rwts: total=62881,33479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.660 00:09:49.660 Run status group 0 (all jobs): 00:09:49.660 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=246MiB (258MB), run=6009-6009msec 00:09:49.660 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=131MiB (137MB), run=5341-5341msec 00:09:49.660 00:09:49.660 Disk stats (read/write): 00:09:49.660 nvme0n1: ios=62078/32950, merge=0/0, ticks=490893/221778, in_queue=712671, util=98.65% 00:09:49.660 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:49.921 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67448 00:09:50.180 12:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:50.180 [global] 00:09:50.180 thread=1 00:09:50.180 invalidate=1 00:09:50.180 rw=randrw 00:09:50.180 time_based=1 00:09:50.180 runtime=6 00:09:50.180 ioengine=libaio 00:09:50.180 direct=1 00:09:50.180 bs=4096 00:09:50.180 iodepth=128 00:09:50.180 norandommap=0 00:09:50.180 numjobs=1 00:09:50.180 00:09:50.180 verify_dump=1 00:09:50.180 verify_backlog=512 00:09:50.180 verify_state_save=0 00:09:50.180 do_verify=1 00:09:50.180 verify=crc32c-intel 00:09:50.439 [job0] 00:09:50.439 filename=/dev/nvme0n1 00:09:50.439 Could not set queue depth (nvme0n1) 00:09:50.439 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:50.439 fio-3.35 00:09:50.439 Starting 1 thread 00:09:51.390 12:34:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:51.648 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
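Throughout this phase the test flips the ANA state of each listener with nvmf_subsystem_listener_set_ana_state while fio keeps I/O running, and check_ana_state waits for the host to observe the change through sysfs. The trace only ever shows the helper's variable setup and its first comparison, so the following is a minimal reconstruction of what it appears to do, assuming the real helper in multipath.sh retries until the 20-second timeout it initialises (not a verbatim copy of the script):

  check_ana_state() {
      local path=$1 ana_state=$2
      local timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      # poll until the path reports the expected ANA state,
      # e.g. check_ana_state nvme0c0n1 inaccessible
      while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1
          sleep 1
      done
  }

Since the fio wrapper was invoked with -r 6 (six-second time_based runs) and crc32c verification enabled, verified I/O continuing to complete across these optimized/non-optimized/inaccessible transitions is effectively the pass criterion for the failover half of the test.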
00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:51.906 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:52.164 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:52.422 12:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67448 00:09:56.612 00:09:56.612 job0: (groupid=0, jobs=1): err= 0: pid=67480: Mon Jul 15 12:34:29 2024 00:09:56.612 read: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(241MiB/6006msec) 00:09:56.612 slat (usec): min=2, max=7955, avg=47.26, stdev=206.98 00:09:56.612 clat (usec): min=558, max=20462, avg=8415.41, stdev=2157.66 00:09:56.612 lat (usec): min=569, max=20477, avg=8462.67, stdev=2175.27 00:09:56.612 clat percentiles (usec): 00:09:56.612 | 1.00th=[ 3490], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 6521], 00:09:56.612 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:09:56.612 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[12125], 00:09:56.612 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15533], 99.95th=[16909], 00:09:56.612 | 99.99th=[19792] 00:09:56.612 bw ( KiB/s): min= 9152, max=34592, per=54.12%, avg=22246.45, stdev=7552.88, samples=11 00:09:56.612 iops : min= 2288, max= 8648, avg=5561.55, stdev=1888.32, samples=11 00:09:56.612 write: IOPS=6225, BW=24.3MiB/s (25.5MB/s)(132MiB/5425msec); 0 zone resets 00:09:56.612 slat (usec): min=3, max=7270, avg=60.20, stdev=155.59 00:09:56.612 clat (usec): min=1843, max=20295, avg=7196.45, stdev=1994.25 00:09:56.612 lat (usec): min=1875, max=20327, avg=7256.64, stdev=2011.85 00:09:56.612 clat percentiles (usec): 00:09:56.612 | 1.00th=[ 3032], 5.00th=[ 3752], 10.00th=[ 4293], 20.00th=[ 5014], 00:09:56.612 | 30.00th=[ 6063], 40.00th=[ 7373], 50.00th=[ 7832], 60.00th=[ 8160], 00:09:56.612 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9372], 00:09:56.612 | 99.00th=[12256], 99.50th=[13304], 99.90th=[18482], 99.95th=[19006], 00:09:56.612 | 99.99th=[19792] 00:09:56.612 bw ( KiB/s): min= 9464, max=34104, per=89.35%, avg=22248.27, stdev=7410.57, samples=11 00:09:56.612 iops : min= 2366, max= 8526, avg=5562.00, stdev=1852.74, samples=11 00:09:56.612 lat (usec) : 750=0.01%, 1000=0.02% 00:09:56.612 lat (msec) : 2=0.08%, 4=3.71%, 10=85.87%, 20=10.31%, 50=0.01% 00:09:56.612 cpu : usr=5.56%, sys=23.70%, ctx=5487, majf=0, minf=102 00:09:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.612 issued rwts: total=61720,33772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.612 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.612 00:09:56.612 Run status group 0 (all jobs): 00:09:56.612 READ: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=241MiB (253MB), run=6006-6006msec 00:09:56.612 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=132MiB (138MB), run=5425-5425msec 00:09:56.612 00:09:56.612 Disk stats (read/write): 00:09:56.612 nvme0n1: ios=61040/32972, merge=0/0, ticks=491144/221406, in_queue=712550, util=98.77% 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:56.612 12:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.871 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.871 rmmod nvme_tcp 00:09:57.131 rmmod nvme_fabrics 00:09:57.131 rmmod nvme_keyring 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67257 ']' 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67257 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67257 ']' 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67257 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67257 00:09:57.131 killing process with pid 67257 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67257' 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67257 00:09:57.131 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67257 00:09:57.390 
12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:57.390 00:09:57.390 real 0m19.605s 00:09:57.390 user 1m13.743s 00:09:57.390 sys 0m9.969s 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.390 12:34:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.390 ************************************ 00:09:57.390 END TEST nvmf_target_multipath 00:09:57.390 ************************************ 00:09:57.390 12:34:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:57.390 12:34:29 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.390 12:34:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.390 12:34:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.390 12:34:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.390 ************************************ 00:09:57.390 START TEST nvmf_zcopy 00:09:57.390 ************************************ 00:09:57.390 12:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.390 * Looking for test storage... 
00:09:57.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.390 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.391 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.391 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:57.650 Cannot find device "nvmf_tgt_br" 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.650 Cannot find device "nvmf_tgt_br2" 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:57.650 Cannot find device "nvmf_tgt_br" 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:57.650 Cannot find device "nvmf_tgt_br2" 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:57.650 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:57.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:09:57.910 00:09:57.910 --- 10.0.0.2 ping statistics --- 00:09:57.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.910 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:57.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:09:57.910 00:09:57.910 --- 10.0.0.3 ping statistics --- 00:09:57.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.910 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:57.910 00:09:57.910 --- 10.0.0.1 ping statistics --- 00:09:57.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.910 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67722 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67722 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67722 ']' 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.910 12:34:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.910 [2024-07-15 12:34:30.511454] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:57.910 [2024-07-15 12:34:30.511541] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.170 [2024-07-15 12:34:30.650276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.170 [2024-07-15 12:34:30.769850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.170 [2024-07-15 12:34:30.769912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
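The nvmf_veth_init trace above builds the virtual test network before the target is launched inside the nvmf_tgt_ns_spdk namespace; the earlier "Cannot find device" / "Cannot open network namespace" messages come from the cleanup pass that removes any leftover devices first. Condensed from that trace, a minimal standalone sketch of the same topology (interface names and the 10.0.0.0/24 address plan are taken directly from the log; run as root):

# Target-side interfaces live in their own network namespace.
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and forwarding across the bridge, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1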
00:09:58.170 [2024-07-15 12:34:30.769922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.170 [2024-07-15 12:34:30.769931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.170 [2024-07-15 12:34:30.769937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.170 [2024-07-15 12:34:30.769961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.170 [2024-07-15 12:34:30.826728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.107 [2024-07-15 12:34:31.584375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.107 [2024-07-15 12:34:31.600440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:59.107 malloc0 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.107 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:59.108 { 00:09:59.108 "params": { 00:09:59.108 "name": "Nvme$subsystem", 00:09:59.108 "trtype": "$TEST_TRANSPORT", 00:09:59.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.108 "adrfam": "ipv4", 00:09:59.108 "trsvcid": "$NVMF_PORT", 00:09:59.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.108 "hdgst": ${hdgst:-false}, 00:09:59.108 "ddgst": ${ddgst:-false} 00:09:59.108 }, 00:09:59.108 "method": "bdev_nvme_attach_controller" 00:09:59.108 } 00:09:59.108 EOF 00:09:59.108 )") 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:59.108 12:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:59.108 "params": { 00:09:59.108 "name": "Nvme1", 00:09:59.108 "trtype": "tcp", 00:09:59.108 "traddr": "10.0.0.2", 00:09:59.108 "adrfam": "ipv4", 00:09:59.108 "trsvcid": "4420", 00:09:59.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.108 "hdgst": false, 00:09:59.108 "ddgst": false 00:09:59.108 }, 00:09:59.108 "method": "bdev_nvme_attach_controller" 00:09:59.108 }' 00:09:59.108 [2024-07-15 12:34:31.688958] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:59.108 [2024-07-15 12:34:31.689031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67759 ] 00:09:59.366 [2024-07-15 12:34:31.826133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.367 [2024-07-15 12:34:31.976545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.626 [2024-07-15 12:34:32.062628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.626 Running I/O for 10 seconds... 
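Before the 10-second verify run starts, zcopy.sh configures the target entirely over JSON-RPC (the rpc_cmd calls traced above). Assuming rpc_cmd resolves to SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock, the same setup could be reproduced by hand roughly as follows; all arguments are copied verbatim from the trace, and the target itself was started earlier inside the namespace with ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2.

# TCP transport, created with the exact flags from the trace (zero-copy enabled).
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host, serial SPDK00000000000001, up to 10 namespaces,
# with a data listener on 10.0.0.2:4420 plus the discovery listener.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1.
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1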
00:10:09.598 00:10:09.598 Latency(us) 00:10:09.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.598 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:09.598 Verification LBA range: start 0x0 length 0x1000 00:10:09.598 Nvme1n1 : 10.02 5870.55 45.86 0.00 0.00 21736.41 2949.12 33363.78 00:10:09.598 =================================================================================================================== 00:10:09.598 Total : 5870.55 45.86 0.00 0.00 21736.41 2949.12 33363.78 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67877 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:09.856 { 00:10:09.856 "params": { 00:10:09.856 "name": "Nvme$subsystem", 00:10:09.856 "trtype": "$TEST_TRANSPORT", 00:10:09.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.856 "adrfam": "ipv4", 00:10:09.856 "trsvcid": "$NVMF_PORT", 00:10:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.856 "hdgst": ${hdgst:-false}, 00:10:09.856 "ddgst": ${ddgst:-false} 00:10:09.856 }, 00:10:09.856 "method": "bdev_nvme_attach_controller" 00:10:09.856 } 00:10:09.856 EOF 00:10:09.856 )") 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:09.856 [2024-07-15 12:34:42.459871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.856 [2024-07-15 12:34:42.459922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
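Both bdevperf runs take their initiator-side configuration on a file descriptor: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters visible above and the result is passed via --json /dev/fd/62 (first run) or /dev/fd/63 (second run). A minimal sketch of an equivalent standalone config file, assuming the usual SPDK "subsystems" JSON layout wrapped around the parameters shown in the log:

# Hypothetical standalone form of the piped config; parameter values match the log output.
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# First pass (results above): 10 s verify workload at queue depth 128 with 8192-byte I/O.
build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192

# Second pass (started above as perfpid 67877): 5 s 50/50 random read/write mix, same depth and I/O size.
build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192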
00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:09.856 12:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:09.856 "params": { 00:10:09.856 "name": "Nvme1", 00:10:09.856 "trtype": "tcp", 00:10:09.856 "traddr": "10.0.0.2", 00:10:09.856 "adrfam": "ipv4", 00:10:09.856 "trsvcid": "4420", 00:10:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.856 "hdgst": false, 00:10:09.856 "ddgst": false 00:10:09.856 }, 00:10:09.856 "method": "bdev_nvme_attach_controller" 00:10:09.856 }' 00:10:09.856 [2024-07-15 12:34:42.471799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.856 [2024-07-15 12:34:42.471827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.856 [2024-07-15 12:34:42.483814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.856 [2024-07-15 12:34:42.483859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.856 [2024-07-15 12:34:42.495802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.856 [2024-07-15 12:34:42.495828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.856 [2024-07-15 12:34:42.507803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.856 [2024-07-15 12:34:42.507830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.856 [2024-07-15 12:34:42.515920] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:09.856 [2024-07-15 12:34:42.516002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67877 ] 00:10:09.856 [2024-07-15 12:34:42.519842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.856 [2024-07-15 12:34:42.519875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.856 [2024-07-15 12:34:42.531841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.856 [2024-07-15 12:34:42.531892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.543883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.543911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.555852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.555893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.567901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.567926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.579879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.579903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.591881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.591906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.603903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.603927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.615894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.615918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.627907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.627935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.639912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.639944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.651904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.651933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.653886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.114 [2024-07-15 12:34:42.663957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.663993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.675936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.675968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.687916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.687945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.699916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.699942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.711919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.711946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.723935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.723966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.735940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.735970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.747945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.747971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.759966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 
[2024-07-15 12:34:42.759990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.771970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.771995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.776397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.114 [2024-07-15 12:34:42.783972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.783996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.114 [2024-07-15 12:34:42.795992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.114 [2024-07-15 12:34:42.796026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.808002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.808036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.820006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.820040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.832010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.832045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.840613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:10.373 [2024-07-15 12:34:42.844008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.844036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.856015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.856051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.868012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.868044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.880001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.880028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.892069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.892102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.904078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.904108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.916082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.916112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.928083] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.928112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.940114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.940144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.952139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.952171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 Running I/O for 5 seconds... 00:10:10.373 [2024-07-15 12:34:42.964145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.964173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.982377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.982412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:42.997880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:42.997910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:43.014331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:43.014379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:43.031920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:43.031952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-07-15 12:34:43.046679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-07-15 12:34:43.046714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.064110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.064147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.078827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.078860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.095096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.095150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.112080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.112116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.128091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.128129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.148007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 
[2024-07-15 12:34:43.148041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.163715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.163775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.179811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.179840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.196013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.196053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.213183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.213225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.229942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.229987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.246796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.246845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.262886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.262933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.279750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.279811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.295984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.296016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.632 [2024-07-15 12:34:43.312106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.632 [2024-07-15 12:34:43.312142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.321765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.321805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.337494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.337530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.353938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.353985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.369944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.369990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.385712] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.385777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.401775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.401809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.418636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.418675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.434470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.434519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.444090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.444122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.460163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.460196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.476326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.476358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.494268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.494311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.510085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.510134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.519496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.519528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.535603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.535641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.554075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.554125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.891 [2024-07-15 12:34:43.569660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.891 [2024-07-15 12:34:43.569694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.587314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.587349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.602437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.602470] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.611951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.611982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.628050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.628083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.643680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.643712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.653252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.653300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.665446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.665480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.680377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.680411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.695814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.695855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.705651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.705705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.721768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.721828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.738194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.738227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.755881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.755913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.771196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.771230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.780879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.780911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.796608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.796641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.813063] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.813100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.149 [2024-07-15 12:34:43.829785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.149 [2024-07-15 12:34:43.829816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.846354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.846387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.861994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.862042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.879478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.879513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.895534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.895569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.914031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.914087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.928565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.928600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.946468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.946506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.961514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.961547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.971413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.971447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:43.986887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:43.986936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:44.002094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:44.002128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:44.017382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:44.017416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:44.036581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:44.036615] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:44.050246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:44.050296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:44.065580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:44.065613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.426 [2024-07-15 12:34:44.075160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.426 [2024-07-15 12:34:44.075195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.685 [2024-07-15 12:34:44.090988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.685 [2024-07-15 12:34:44.091022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.685 [2024-07-15 12:34:44.106621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.685 [2024-07-15 12:34:44.106656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.685 [2024-07-15 12:34:44.124928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.685 [2024-07-15 12:34:44.124961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.685 [2024-07-15 12:34:44.140437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.685 [2024-07-15 12:34:44.140470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.685 [2024-07-15 12:34:44.156466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.685 [2024-07-15 12:34:44.156499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.166504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.166537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.183275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.183312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.197619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.197651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.213016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.213053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.228632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.228665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.245797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.245864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.263208] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.263241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.279026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.279057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.297425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.297460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.312637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.312668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.322826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.322858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.339370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.339402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.686 [2024-07-15 12:34:44.354770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.686 [2024-07-15 12:34:44.354806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.943 [2024-07-15 12:34:44.372385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.943 [2024-07-15 12:34:44.372418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.943 [2024-07-15 12:34:44.388342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.943 [2024-07-15 12:34:44.388377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.943 [2024-07-15 12:34:44.398367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.943 [2024-07-15 12:34:44.398400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.943 [2024-07-15 12:34:44.414622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.943 [2024-07-15 12:34:44.414658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.943 [2024-07-15 12:34:44.429836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.943 [2024-07-15 12:34:44.429887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.943 [2024-07-15 12:34:44.445635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.943 [2024-07-15 12:34:44.445670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.943 [2024-07-15 12:34:44.462628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.462662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.479185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.479220] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.495817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.495861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.513178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.513212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.528903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.528935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.546426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.546462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.561881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.561914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.571871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.571906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.587650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.587720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.603826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.603901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.944 [2024-07-15 12:34:44.622965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.944 [2024-07-15 12:34:44.622997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.638012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.638046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.647081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.647113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.664154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.664190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.679753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.679802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.695260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.695311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.705321] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.705358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.721090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.721124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.739329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.739363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.754369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.754406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.770366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.770401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.788585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.788626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.803951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.803987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.821770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.821856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.837915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.837951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.854523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.854558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.202 [2024-07-15 12:34:44.872671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.202 [2024-07-15 12:34:44.872713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.887265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.887303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.903328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.903365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.920220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.920284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.936739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.936796] 
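The long run of paired messages above and below repeats while the 5-second randrw job is in flight: judging from the file and function names, each pair appears to be one nvmf_subsystem_add_ns request being rejected because NSID 1 is already attached to malloc0, a path the test presumably exercises deliberately while I/O is running. Outside the harness, the same rejection can be provoked with a single hypothetical call against the target configured earlier:

# NSID 1 is already occupied by malloc0, so this is expected to fail with
# "Requested NSID 1 already in use" / "Unable to add namespace".
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1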
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.954670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.954717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.969732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.969805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.984860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.984893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:44.999875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:44.999908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.016004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.016037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.033725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.033783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.048314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.048347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.063903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.063936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.081932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.081969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.096748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.096781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.106168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.106199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.122230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.122263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.461 [2024-07-15 12:34:45.132434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.461 [2024-07-15 12:34:45.132469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.147617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.147650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.163893] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.163926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.181567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.181616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.196245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.196280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.212210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.212261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.229183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.229232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.245870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.245915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.261942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.261989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.281366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.281413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.296326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.296374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.313913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.313946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.330374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.330429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.347307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.347354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.363024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.363054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.380608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.380645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.720 [2024-07-15 12:34:45.397317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.720 [2024-07-15 12:34:45.397365] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.414153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.414186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.430262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.430297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.448143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.448179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.464164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.464230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.481349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.481403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.496545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.496581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.512587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.512621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.529244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.529298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.547026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.547072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.562301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.562358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.580418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.580451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.595280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.595310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.610874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.610905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.628482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.628529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.643440] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.643486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.979 [2024-07-15 12:34:45.658459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.979 [2024-07-15 12:34:45.658506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.674226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.674273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.683553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.683600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.698935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.698974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.714053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.714099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.730002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.730048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.747547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.747580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.762981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.763013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.780903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.780936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.796109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.796141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.805800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.805836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.820851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.820908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.831432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.831478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.846650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.846698] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.863593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.863626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.879676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.879723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.895840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.895899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.239 [2024-07-15 12:34:45.905408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.239 [2024-07-15 12:34:45.905454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:45.921700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:45.921744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:45.938056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:45.938090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:45.957139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:45.957177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:45.971964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:45.971999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:45.981580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:45.981630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:45.997831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:45.997864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.014226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.014259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.031258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.031310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.046252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.046299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.062121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.062152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.080495] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.080543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.095700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.095777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.112219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.112255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.129182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.129214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.145570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.145604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.161484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.161536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.498 [2024-07-15 12:34:46.171219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.498 [2024-07-15 12:34:46.171254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.187334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.187371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.203960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.203997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.220080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.220118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.238845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.238882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.253612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.253651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.268796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.268835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.277895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.277932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.294775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.294841] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.757 [2024-07-15 12:34:46.312283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.757 [2024-07-15 12:34:46.312337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.758 [2024-07-15 12:34:46.327292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.758 [2024-07-15 12:34:46.327344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.758 [2024-07-15 12:34:46.345167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.758 [2024-07-15 12:34:46.345205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.758 [2024-07-15 12:34:46.360470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.758 [2024-07-15 12:34:46.360508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.758 [2024-07-15 12:34:46.378804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.758 [2024-07-15 12:34:46.378843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.758 [2024-07-15 12:34:46.396442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.758 [2024-07-15 12:34:46.396560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.758 [2024-07-15 12:34:46.414287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.758 [2024-07-15 12:34:46.414340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.758 [2024-07-15 12:34:46.433546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.758 [2024-07-15 12:34:46.433596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.450603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.450667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.464484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.464550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.481186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.481245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.498161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.498210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.511651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.511700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.527788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.527842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.545941] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.545989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.559612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.559671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.576087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.576137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.016 [2024-07-15 12:34:46.593434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.016 [2024-07-15 12:34:46.593482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.017 [2024-07-15 12:34:46.607176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.017 [2024-07-15 12:34:46.607235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.017 [2024-07-15 12:34:46.625236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.017 [2024-07-15 12:34:46.625285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.017 [2024-07-15 12:34:46.639902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.017 [2024-07-15 12:34:46.639950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.017 [2024-07-15 12:34:46.656038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.017 [2024-07-15 12:34:46.656091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.017 [2024-07-15 12:34:46.672122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.017 [2024-07-15 12:34:46.672184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.017 [2024-07-15 12:34:46.689343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.017 [2024-07-15 12:34:46.689395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.702780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.702828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.718588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.718635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.737212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.737259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.750531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.750579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.770742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.770788] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.788022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.788071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.801067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.801115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.819901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.819949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.833965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.834012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.851686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.851752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.868352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.868401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.881781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.881830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.900585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.900636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.916002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.916045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.926795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.926834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.942483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.942522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.275 [2024-07-15 12:34:46.956896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.275 [2024-07-15 12:34:46.956944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:46.973463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:46.973502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:46.989238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:46.989281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:46.999077] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:46.999116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.014411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.014452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.032805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.032845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.047760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.047817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.062727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.062789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.078035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.078080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.088847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.088885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.104870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.104909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.122012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.122053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.533 [2024-07-15 12:34:47.138737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.533 [2024-07-15 12:34:47.138803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.534 [2024-07-15 12:34:47.154251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.534 [2024-07-15 12:34:47.154290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.534 [2024-07-15 12:34:47.164830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.534 [2024-07-15 12:34:47.164881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.534 [2024-07-15 12:34:47.181069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.534 [2024-07-15 12:34:47.181108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.534 [2024-07-15 12:34:47.195150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.534 [2024-07-15 12:34:47.195192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.534 [2024-07-15 12:34:47.210499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.534 [2024-07-15 12:34:47.210550] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.220233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.220288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.235796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.235837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.252619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.252663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.270496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.270541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.286328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.286368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.303724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.303789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.319328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.319368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.338433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.338475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.353709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.353764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.364601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.364644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.380216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.380273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.394953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.395008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.409453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.409495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.425960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.426001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.442590] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.442630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.792 [2024-07-15 12:34:47.459260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.792 [2024-07-15 12:34:47.459298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.050 [2024-07-15 12:34:47.476911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.476949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.491329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.491368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.507421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.507465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.524372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.524411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.541371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.541409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.556969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.557008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.567617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.567683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.583838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.583885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.598587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.598626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.609456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.609495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.625564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.625603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.641017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.641056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.659795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.659832] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.673477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.673519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.688343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.688383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.703615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.703654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.051 [2024-07-15 12:34:47.719997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.051 [2024-07-15 12:34:47.720038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.736938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.736975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.753262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.753300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.771066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.771105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.785824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.785860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.801181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.801220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.816467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.816507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.827001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.827042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.842554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.842593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.857064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.857102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.873174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.310 [2024-07-15 12:34:47.873213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.310 [2024-07-15 12:34:47.890456] 
00:10:15.310 
00:10:15.310 Latency(us)
00:10:15.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:15.310 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:15.310 Nvme1n1 : 5.01 10865.96 84.89 0.00 0.00 11763.65 4855.62 25737.77
00:10:15.310 ===================================================================================================================
00:10:15.310 Total : 10865.96 84.89 0.00 0.00 11763.65 4855.62 25737.77
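The block above is the run's I/O summary for the Nvme1n1 bdev: a 5 s 50/50 randrw job at queue depth 128 with 8 KiB I/Os, reporting IOPS, MiB/s, failure and timeout rates, and average/min/max latency in microseconds (per the Latency(us) header). The layout looks like SPDK bdevperf output; as a rough, hypothetical way to reproduce a comparable run against the same target outside the test harness (the bdevperf path, the gen_nvmf_target_json helper from test/nvmf/common.sh, and $rootdir are assumptions not shown in this excerpt; the job parameters are taken from the summary line), something like:

  # Sketch only: replay the summarized workload with bdevperf against the running target.
  # gen_nvmf_target_json emits a bdev config pointing at the nvmf listener; $rootdir is the SPDK repo root.
  "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
      -q 128 -o 8192 -w randrw -M 50 -t 5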
[... after the I/O summary the same "Requested NSID 1 already in use" / "Unable to add namespace" pair resumes at 12:34:47.978 and repeats at ~12 ms intervals through 12:34:48.218 (log time 00:10:15.310 to 00:10:15.569); the repetitions are condensed here ...]
00:10:15.569 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67877) - No such process
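For context, the long condensed error run above is the duplicate-NSID path of nvmf_subsystem_add_ns: once NSID 1 is claimed on nqn.2016-06.io.spdk:cnode1, every further add with -n 1 is rejected with exactly this message pair. A minimal sketch of triggering the same errors by hand with scripts/rpc.py, assuming a running nvmf target with a TCP listener; the Malloc0 bdev and the subsystem-creation step are assumptions (in this run the subsystem and its first namespace already existed), only the NQN, serial, and NSID come from the log:

  # Sketch: the second nvmf_subsystem_add_ns call is the one that produces the errors above.
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # succeeds, NSID 1 now in use
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # rejected: "Requested NSID 1 already in use"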
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67877
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:15.569 delay0
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:15.569 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:15.828 12:34:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:15.828 12:34:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:15.828 [2024-07-15 12:34:48.427256] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:22.385 Initializing NVMe Controllers
00:10:22.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:22.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:22.385 Initialization complete. Launching workers.
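Traced above, zcopy.sh swaps the original namespace for a delay bdev layered on malloc0 and then drives it with the abort example. Outside the rpc_cmd wrapper the same sequence would look roughly like the sketch below; the commands and their arguments are taken from the trace, while the default rpc.py socket, relative paths, and the optional nvmf_get_subsystems check are assumptions:

  # Same steps as target/zcopy.sh@52-56, expressed as standalone commands (sketch).
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000        # average/p99 read and write latencies, microseconds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  scripts/rpc.py nvmf_get_subsystems                      # optional: confirm delay0 now backs NSID 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'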
00:10:22.385 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:10:22.385 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 33 00:10:22.385 success 264, unsuccess 123, failed 0 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:22.385 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:22.385 rmmod nvme_tcp 00:10:22.385 rmmod nvme_fabrics 00:10:22.385 rmmod nvme_keyring 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67722 ']' 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67722 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67722 ']' 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67722 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67722 00:10:22.386 killing process with pid 67722 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67722' 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67722 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67722 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:22.386 00:10:22.386 real 0m25.014s 00:10:22.386 user 0m39.837s 00:10:22.386 sys 0m7.843s 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.386 ************************************ 00:10:22.386 END TEST nvmf_zcopy 00:10:22.386 ************************************ 00:10:22.386 12:34:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.386 12:34:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:22.386 12:34:55 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:22.386 12:34:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.386 12:34:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.386 12:34:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.386 ************************************ 00:10:22.386 START TEST nvmf_nmic 00:10:22.386 ************************************ 00:10:22.386 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:22.647 * Looking for test storage... 00:10:22.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:22.647 Cannot find device "nvmf_tgt_br" 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.647 Cannot find device "nvmf_tgt_br2" 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:22.647 Cannot find device "nvmf_tgt_br" 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:22.647 Cannot find device "nvmf_tgt_br2" 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.647 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:22.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:22.922 00:10:22.922 --- 10.0.0.2 ping statistics --- 00:10:22.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.922 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:22.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:22.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:22.922 00:10:22.922 --- 10.0.0.3 ping statistics --- 00:10:22.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.922 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:22.922 00:10:22.922 --- 10.0.0.1 ping statistics --- 00:10:22.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.922 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.922 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68196 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68196 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68196 ']' 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.923 12:34:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.923 [2024-07-15 12:34:55.564123] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:10:22.923 [2024-07-15 12:34:55.564457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.182 [2024-07-15 12:34:55.706010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.182 [2024-07-15 12:34:55.825520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.182 [2024-07-15 12:34:55.825564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.182 [2024-07-15 12:34:55.825576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.182 [2024-07-15 12:34:55.825585] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.182 [2024-07-15 12:34:55.825593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.182 [2024-07-15 12:34:55.825723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.182 [2024-07-15 12:34:55.826471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.182 [2024-07-15 12:34:55.826634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.182 [2024-07-15 12:34:55.826639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.440 [2024-07-15 12:34:55.882408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.008 [2024-07-15 12:34:56.619232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.008 Malloc0 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
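(Condensed reference for the surrounding xtrace output: the nmic target bring-up reduces to the short shell sketch below. This is only a sketch reconstructed from the RPC calls visible in this log; the bdev size, NQN, serial, host NQN/ID, and the 10.0.0.2:4420 listener address are simply the values used by this run, and rpc.py is invoked directly rather than through the test suite's rpc_cmd wrapper.)

# create the TCP transport that subsystems will listen on
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# back the namespace with a 64 MB, 512-byte-block malloc bdev
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# create subsystem cnode1, attach the bdev as a namespace, and expose a TCP listener
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side, using the host NQN/ID generated for this run
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420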
00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.008 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.267 [2024-07-15 12:34:56.692448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.267 test case1: single bdev can't be used in multiple subsystems 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.267 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.267 [2024-07-15 12:34:56.720307] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:24.267 [2024-07-15 12:34:56.720355] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:24.267 [2024-07-15 12:34:56.720384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.267 request: 00:10:24.267 { 00:10:24.267 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:24.267 "namespace": { 00:10:24.267 "bdev_name": "Malloc0", 00:10:24.267 "no_auto_visible": false 00:10:24.267 }, 00:10:24.267 "method": "nvmf_subsystem_add_ns", 00:10:24.267 "req_id": 1 00:10:24.267 } 00:10:24.267 Got JSON-RPC error response 00:10:24.267 response: 00:10:24.267 { 00:10:24.267 "code": -32602, 00:10:24.267 "message": "Invalid parameters" 00:10:24.267 } 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:24.268 Adding namespace failed - expected result. 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:24.268 test case2: host connect to nvmf target in multiple paths 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.268 [2024-07-15 12:34:56.736392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.268 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:24.526 12:34:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.526 12:34:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:24.526 12:34:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.526 12:34:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:24.526 12:34:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.486 12:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.486 12:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.486 12:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.486 12:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.486 12:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.486 12:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:26.486 12:34:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.486 [global] 00:10:26.486 thread=1 00:10:26.486 invalidate=1 00:10:26.486 rw=write 00:10:26.486 time_based=1 00:10:26.486 runtime=1 00:10:26.486 ioengine=libaio 00:10:26.486 direct=1 00:10:26.486 bs=4096 00:10:26.486 iodepth=1 00:10:26.486 norandommap=0 00:10:26.486 numjobs=1 00:10:26.486 00:10:26.486 verify_dump=1 00:10:26.486 verify_backlog=512 00:10:26.486 verify_state_save=0 00:10:26.486 do_verify=1 00:10:26.486 verify=crc32c-intel 00:10:26.486 [job0] 00:10:26.486 filename=/dev/nvme0n1 00:10:26.486 Could not set queue depth (nvme0n1) 00:10:26.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.745 fio-3.35 00:10:26.745 Starting 1 thread 00:10:27.680 00:10:27.680 job0: (groupid=0, jobs=1): err= 0: pid=68292: Mon Jul 15 12:35:00 
2024 00:10:27.680 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:27.680 slat (nsec): min=15278, max=42616, avg=18043.16, stdev=3074.15 00:10:27.680 clat (usec): min=160, max=601, avg=252.16, stdev=28.31 00:10:27.680 lat (usec): min=176, max=618, avg=270.20, stdev=28.74 00:10:27.680 clat percentiles (usec): 00:10:27.680 | 1.00th=[ 184], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 231], 00:10:27.680 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:10:27.680 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:10:27.680 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 396], 99.95th=[ 465], 00:10:27.680 | 99.99th=[ 603] 00:10:27.680 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(9.93MiB/1001msec); 0 zone resets 00:10:27.680 slat (usec): min=21, max=150, avg=26.63, stdev= 6.17 00:10:27.680 clat (usec): min=95, max=449, avg=145.37, stdev=23.16 00:10:27.680 lat (usec): min=119, max=537, avg=172.00, stdev=24.54 00:10:27.680 clat percentiles (usec): 00:10:27.680 | 1.00th=[ 105], 5.00th=[ 114], 10.00th=[ 121], 20.00th=[ 130], 00:10:27.680 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:10:27.680 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 178], 00:10:27.680 | 99.00th=[ 210], 99.50th=[ 258], 99.90th=[ 367], 99.95th=[ 388], 00:10:27.680 | 99.99th=[ 449] 00:10:27.680 bw ( KiB/s): min= 9784, max= 9784, per=96.32%, avg=9784.00, stdev= 0.00, samples=1 00:10:27.680 iops : min= 2446, max= 2446, avg=2446.00, stdev= 0.00, samples=1 00:10:27.680 lat (usec) : 100=0.22%, 250=74.49%, 500=25.27%, 750=0.02% 00:10:27.680 cpu : usr=2.30%, sys=7.80%, ctx=4591, majf=0, minf=2 00:10:27.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.680 issued rwts: total=2048,2542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.680 00:10:27.680 Run status group 0 (all jobs): 00:10:27.680 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:27.680 WRITE: bw=9.92MiB/s (10.4MB/s), 9.92MiB/s-9.92MiB/s (10.4MB/s-10.4MB/s), io=9.93MiB (10.4MB), run=1001-1001msec 00:10:27.680 00:10:27.680 Disk stats (read/write): 00:10:27.680 nvme0n1: ios=2042/2048, merge=0/0, ticks=533/320, in_queue=853, util=91.48% 00:10:27.680 12:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.938 rmmod nvme_tcp 00:10:27.938 rmmod nvme_fabrics 00:10:27.938 rmmod nvme_keyring 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68196 ']' 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68196 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68196 ']' 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68196 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68196 00:10:27.938 killing process with pid 68196 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68196' 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68196 00:10:27.938 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68196 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:28.195 00:10:28.195 real 0m5.771s 00:10:28.195 user 0m18.736s 00:10:28.195 sys 0m1.996s 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.195 ************************************ 00:10:28.195 END TEST nvmf_nmic 00:10:28.195 ************************************ 00:10:28.195 12:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.195 12:35:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:28.195 12:35:00 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.195 12:35:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:28.195 12:35:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.195 12:35:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.195 ************************************ 00:10:28.195 START TEST nvmf_fio_target 00:10:28.195 ************************************ 00:10:28.195 12:35:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.452 * Looking for test storage... 00:10:28.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.452 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.453 12:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:28.453 Cannot find device "nvmf_tgt_br" 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.453 Cannot find device "nvmf_tgt_br2" 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:28.453 Cannot find device "nvmf_tgt_br" 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:28.453 Cannot find device "nvmf_tgt_br2" 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:28.453 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:28.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:10:28.710 00:10:28.710 --- 10.0.0.2 ping statistics --- 00:10:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.710 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:28.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:28.710 00:10:28.710 --- 10.0.0.3 ping statistics --- 00:10:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.710 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:28.710 00:10:28.710 --- 10.0.0.1 ping statistics --- 00:10:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.710 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68471 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68471 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68471 ']' 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.710 12:35:01 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.710 12:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.968 [2024-07-15 12:35:01.426061] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:28.968 [2024-07-15 12:35:01.426147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.968 [2024-07-15 12:35:01.563190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.226 [2024-07-15 12:35:01.680152] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.226 [2024-07-15 12:35:01.680437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.226 [2024-07-15 12:35:01.680545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.226 [2024-07-15 12:35:01.680560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.226 [2024-07-15 12:35:01.680571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.226 [2024-07-15 12:35:01.680708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.226 [2024-07-15 12:35:01.681233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.226 [2024-07-15 12:35:01.681403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.226 [2024-07-15 12:35:01.681410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.226 [2024-07-15 12:35:01.737932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:29.792 12:35:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.792 12:35:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:29.792 12:35:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.792 12:35:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.792 12:35:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.792 12:35:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.792 12:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:30.051 [2024-07-15 12:35:02.611717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.051 12:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.309 12:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:30.309 12:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:10:30.567 12:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:30.567 12:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.825 12:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:30.825 12:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.390 12:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:31.390 12:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:31.648 12:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.907 12:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:31.907 12:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.164 12:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:32.164 12:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.422 12:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:32.422 12:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:32.680 12:35:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:32.680 12:35:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.680 12:35:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.938 12:35:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.938 12:35:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.195 12:35:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.452 [2024-07-15 12:35:06.030339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.452 12:35:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:33.710 12:35:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:33.968 12:35:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.225 12:35:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:34.225 12:35:06 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:10:34.225 12:35:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.225 12:35:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:34.225 12:35:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:34.225 12:35:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:36.138 12:35:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:36.138 12:35:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:36.138 12:35:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.138 12:35:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:36.138 12:35:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.138 12:35:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:36.138 12:35:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:36.138 [global] 00:10:36.138 thread=1 00:10:36.138 invalidate=1 00:10:36.138 rw=write 00:10:36.138 time_based=1 00:10:36.138 runtime=1 00:10:36.138 ioengine=libaio 00:10:36.138 direct=1 00:10:36.138 bs=4096 00:10:36.138 iodepth=1 00:10:36.138 norandommap=0 00:10:36.138 numjobs=1 00:10:36.138 00:10:36.138 verify_dump=1 00:10:36.138 verify_backlog=512 00:10:36.138 verify_state_save=0 00:10:36.138 do_verify=1 00:10:36.138 verify=crc32c-intel 00:10:36.138 [job0] 00:10:36.138 filename=/dev/nvme0n1 00:10:36.138 [job1] 00:10:36.138 filename=/dev/nvme0n2 00:10:36.138 [job2] 00:10:36.138 filename=/dev/nvme0n3 00:10:36.138 [job3] 00:10:36.138 filename=/dev/nvme0n4 00:10:36.138 Could not set queue depth (nvme0n1) 00:10:36.138 Could not set queue depth (nvme0n2) 00:10:36.138 Could not set queue depth (nvme0n3) 00:10:36.138 Could not set queue depth (nvme0n4) 00:10:36.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.395 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.395 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.395 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.395 fio-3.35 00:10:36.395 Starting 4 threads 00:10:37.768 00:10:37.768 job0: (groupid=0, jobs=1): err= 0: pid=68657: Mon Jul 15 12:35:10 2024 00:10:37.768 read: IOPS=2573, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec) 00:10:37.768 slat (nsec): min=13704, max=38271, avg=16156.65, stdev=2194.53 00:10:37.768 clat (usec): min=156, max=252, avg=185.94, stdev=12.79 00:10:37.768 lat (usec): min=171, max=271, avg=202.10, stdev=13.18 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:10:37.768 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:10:37.768 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:10:37.768 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 239], 99.95th=[ 249], 00:10:37.768 | 99.99th=[ 253] 00:10:37.768 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:37.768 slat (nsec): 
min=16188, max=80875, avg=23846.79, stdev=4933.11 00:10:37.768 clat (usec): min=100, max=603, avg=128.71, stdev=18.20 00:10:37.768 lat (usec): min=122, max=630, avg=152.56, stdev=19.75 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:10:37.768 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 130], 00:10:37.768 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 155], 00:10:37.768 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 293], 99.95th=[ 490], 00:10:37.768 | 99.99th=[ 603] 00:10:37.768 bw ( KiB/s): min=12263, max=12263, per=32.45%, avg=12263.00, stdev= 0.00, samples=1 00:10:37.768 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:37.768 lat (usec) : 250=99.89%, 500=0.09%, 750=0.02% 00:10:37.768 cpu : usr=2.80%, sys=8.50%, ctx=5652, majf=0, minf=9 00:10:37.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 issued rwts: total=2576,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.768 job1: (groupid=0, jobs=1): err= 0: pid=68658: Mon Jul 15 12:35:10 2024 00:10:37.768 read: IOPS=2586, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:10:37.768 slat (nsec): min=12645, max=97970, avg=15268.69, stdev=3086.21 00:10:37.768 clat (usec): min=154, max=494, avg=186.69, stdev=14.51 00:10:37.768 lat (usec): min=169, max=511, avg=201.96, stdev=15.10 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:10:37.768 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:10:37.768 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 210], 00:10:37.768 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 243], 99.95th=[ 253], 00:10:37.768 | 99.99th=[ 494] 00:10:37.768 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:37.768 slat (usec): min=14, max=118, avg=22.59, stdev= 6.20 00:10:37.768 clat (usec): min=101, max=357, avg=129.58, stdev=14.83 00:10:37.768 lat (usec): min=121, max=419, avg=152.17, stdev=17.90 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 108], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 118], 00:10:37.768 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 131], 00:10:37.768 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:10:37.768 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 196], 99.95th=[ 302], 00:10:37.768 | 99.99th=[ 359] 00:10:37.768 bw ( KiB/s): min=12288, max=12288, per=32.51%, avg=12288.00, stdev= 0.00, samples=1 00:10:37.768 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:37.768 lat (usec) : 250=99.93%, 500=0.07% 00:10:37.768 cpu : usr=2.10%, sys=8.60%, ctx=5662, majf=0, minf=4 00:10:37.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 issued rwts: total=2589,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.768 job2: (groupid=0, jobs=1): err= 0: pid=68659: Mon Jul 15 12:35:10 2024 00:10:37.768 read: IOPS=1519, BW=6078KiB/s (6224kB/s)(6084KiB/1001msec) 
00:10:37.768 slat (nsec): min=15228, max=67322, avg=22836.33, stdev=6810.42 00:10:37.768 clat (usec): min=236, max=2624, avg=347.07, stdev=87.05 00:10:37.768 lat (usec): min=255, max=2652, avg=369.91, stdev=89.75 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:10:37.768 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:10:37.768 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 445], 95.00th=[ 486], 00:10:37.768 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 857], 99.95th=[ 2638], 00:10:37.768 | 99.99th=[ 2638] 00:10:37.768 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:37.768 slat (nsec): min=21024, max=87375, avg=33670.98, stdev=8810.84 00:10:37.768 clat (usec): min=129, max=651, avg=245.86, stdev=48.59 00:10:37.768 lat (usec): min=157, max=686, avg=279.53, stdev=52.83 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 145], 5.00th=[ 174], 10.00th=[ 208], 20.00th=[ 221], 00:10:37.768 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:10:37.768 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 367], 00:10:37.768 | 99.00th=[ 412], 99.50th=[ 441], 99.90th=[ 482], 99.95th=[ 652], 00:10:37.768 | 99.99th=[ 652] 00:10:37.768 bw ( KiB/s): min= 8175, max= 8175, per=21.63%, avg=8175.00, stdev= 0.00, samples=1 00:10:37.768 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:37.768 lat (usec) : 250=34.05%, 500=63.72%, 750=2.13%, 1000=0.07% 00:10:37.768 lat (msec) : 4=0.03% 00:10:37.768 cpu : usr=2.10%, sys=6.50%, ctx=3060, majf=0, minf=13 00:10:37.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 issued rwts: total=1521,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.768 job3: (groupid=0, jobs=1): err= 0: pid=68660: Mon Jul 15 12:35:10 2024 00:10:37.768 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:37.768 slat (nsec): min=13994, max=63426, avg=21460.92, stdev=6192.96 00:10:37.768 clat (usec): min=176, max=910, avg=325.13, stdev=52.39 00:10:37.768 lat (usec): min=193, max=928, avg=346.59, stdev=54.85 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 208], 5.00th=[ 269], 10.00th=[ 289], 20.00th=[ 302], 00:10:37.768 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:10:37.768 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 400], 00:10:37.768 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 693], 99.95th=[ 914], 00:10:37.768 | 99.99th=[ 914] 00:10:37.768 write: IOPS=1776, BW=7105KiB/s (7275kB/s)(7112KiB/1001msec); 0 zone resets 00:10:37.768 slat (usec): min=21, max=129, avg=31.97, stdev= 8.23 00:10:37.768 clat (usec): min=113, max=547, avg=226.36, stdev=40.06 00:10:37.768 lat (usec): min=138, max=677, avg=258.33, stdev=43.09 00:10:37.768 clat percentiles (usec): 00:10:37.768 | 1.00th=[ 131], 5.00th=[ 145], 10.00th=[ 159], 20.00th=[ 206], 00:10:37.768 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 239], 00:10:37.768 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:10:37.768 | 99.00th=[ 306], 99.50th=[ 363], 99.90th=[ 486], 99.95th=[ 545], 00:10:37.768 | 99.99th=[ 545] 00:10:37.768 bw ( KiB/s): min= 8192, max= 8192, per=21.68%, avg=8192.00, stdev= 0.00, samples=1 
00:10:37.768 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:37.768 lat (usec) : 250=42.94%, 500=56.25%, 750=0.78%, 1000=0.03% 00:10:37.768 cpu : usr=2.30%, sys=6.50%, ctx=3314, majf=0, minf=11 00:10:37.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.768 issued rwts: total=1536,1778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.768 00:10:37.768 Run status group 0 (all jobs): 00:10:37.768 READ: bw=32.1MiB/s (33.6MB/s), 6078KiB/s-10.1MiB/s (6224kB/s-10.6MB/s), io=32.1MiB (33.7MB), run=1001-1001msec 00:10:37.768 WRITE: bw=36.9MiB/s (38.7MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=36.9MiB (38.7MB), run=1001-1001msec 00:10:37.768 00:10:37.768 Disk stats (read/write): 00:10:37.768 nvme0n1: ios=2239/2560, merge=0/0, ticks=439/343, in_queue=782, util=86.65% 00:10:37.768 nvme0n2: ios=2228/2560, merge=0/0, ticks=460/365, in_queue=825, util=87.26% 00:10:37.768 nvme0n3: ios=1157/1536, merge=0/0, ticks=385/406, in_queue=791, util=88.48% 00:10:37.768 nvme0n4: ios=1244/1536, merge=0/0, ticks=404/372, in_queue=776, util=89.53% 00:10:37.768 12:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:37.768 [global] 00:10:37.768 thread=1 00:10:37.768 invalidate=1 00:10:37.768 rw=randwrite 00:10:37.768 time_based=1 00:10:37.768 runtime=1 00:10:37.768 ioengine=libaio 00:10:37.768 direct=1 00:10:37.768 bs=4096 00:10:37.768 iodepth=1 00:10:37.769 norandommap=0 00:10:37.769 numjobs=1 00:10:37.769 00:10:37.769 verify_dump=1 00:10:37.769 verify_backlog=512 00:10:37.769 verify_state_save=0 00:10:37.769 do_verify=1 00:10:37.769 verify=crc32c-intel 00:10:37.769 [job0] 00:10:37.769 filename=/dev/nvme0n1 00:10:37.769 [job1] 00:10:37.769 filename=/dev/nvme0n2 00:10:37.769 [job2] 00:10:37.769 filename=/dev/nvme0n3 00:10:37.769 [job3] 00:10:37.769 filename=/dev/nvme0n4 00:10:37.769 Could not set queue depth (nvme0n1) 00:10:37.769 Could not set queue depth (nvme0n2) 00:10:37.769 Could not set queue depth (nvme0n3) 00:10:37.769 Could not set queue depth (nvme0n4) 00:10:37.769 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.769 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.769 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.769 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.769 fio-3.35 00:10:37.769 Starting 4 threads 00:10:39.140 00:10:39.140 job0: (groupid=0, jobs=1): err= 0: pid=68718: Mon Jul 15 12:35:11 2024 00:10:39.140 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:39.140 slat (usec): min=13, max=126, avg=17.80, stdev= 6.08 00:10:39.140 clat (usec): min=95, max=643, avg=201.11, stdev=23.73 00:10:39.140 lat (usec): min=175, max=659, avg=218.91, stdev=24.15 00:10:39.140 clat percentiles (usec): 00:10:39.140 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:10:39.140 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:10:39.140 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 237], 
00:10:39.140 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 375], 99.95th=[ 545], 00:10:39.140 | 99.99th=[ 644] 00:10:39.140 write: IOPS=2573, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec); 0 zone resets 00:10:39.140 slat (usec): min=18, max=200, avg=24.66, stdev= 6.37 00:10:39.140 clat (usec): min=95, max=456, avg=141.90, stdev=17.83 00:10:39.140 lat (usec): min=118, max=477, avg=166.56, stdev=19.36 00:10:39.140 clat percentiles (usec): 00:10:39.140 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 128], 00:10:39.140 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:10:39.140 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 00:10:39.140 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 237], 99.95th=[ 253], 00:10:39.140 | 99.99th=[ 457] 00:10:39.140 bw ( KiB/s): min=12288, max=12288, per=33.24%, avg=12288.00, stdev= 0.00, samples=1 00:10:39.140 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:39.140 lat (usec) : 100=0.04%, 250=98.93%, 500=0.99%, 750=0.04% 00:10:39.140 cpu : usr=2.20%, sys=8.60%, ctx=5149, majf=0, minf=5 00:10:39.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.140 issued rwts: total=2560,2576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.140 job1: (groupid=0, jobs=1): err= 0: pid=68719: Mon Jul 15 12:35:11 2024 00:10:39.140 read: IOPS=1643, BW=6572KiB/s (6730kB/s)(6572KiB/1000msec) 00:10:39.140 slat (nsec): min=12778, max=83846, avg=17139.38, stdev=6137.67 00:10:39.140 clat (usec): min=204, max=1684, avg=288.71, stdev=44.30 00:10:39.140 lat (usec): min=220, max=1699, avg=305.85, stdev=45.51 00:10:39.140 clat percentiles (usec): 00:10:39.140 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:10:39.140 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:39.140 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:10:39.140 | 99.00th=[ 396], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 1680], 00:10:39.140 | 99.99th=[ 1680] 00:10:39.140 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:10:39.141 slat (nsec): min=12457, max=91170, avg=23335.25, stdev=7613.56 00:10:39.141 clat (usec): min=114, max=410, avg=216.01, stdev=21.66 00:10:39.141 lat (usec): min=163, max=454, avg=239.35, stdev=24.52 00:10:39.141 clat percentiles (usec): 00:10:39.141 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 198], 00:10:39.141 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:10:39.141 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 253], 00:10:39.141 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 359], 00:10:39.141 | 99.99th=[ 412] 00:10:39.141 bw ( KiB/s): min= 8208, max= 8208, per=22.21%, avg=8208.00, stdev= 0.00, samples=1 00:10:39.141 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:10:39.141 lat (usec) : 250=52.59%, 500=47.22%, 750=0.16% 00:10:39.141 lat (msec) : 2=0.03% 00:10:39.141 cpu : usr=1.50%, sys=6.70%, ctx=3691, majf=0, minf=11 00:10:39.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.141 issued rwts: 
total=1643,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.141 job2: (groupid=0, jobs=1): err= 0: pid=68720: Mon Jul 15 12:35:11 2024 00:10:39.141 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:39.141 slat (nsec): min=13255, max=61404, avg=16490.76, stdev=3344.10 00:10:39.141 clat (usec): min=129, max=1653, avg=201.57, stdev=33.19 00:10:39.141 lat (usec): min=173, max=1669, avg=218.07, stdev=33.49 00:10:39.141 clat percentiles (usec): 00:10:39.141 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:10:39.141 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 204], 00:10:39.141 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:10:39.141 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 285], 99.95th=[ 351], 00:10:39.141 | 99.99th=[ 1647] 00:10:39.141 write: IOPS=2575, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec); 0 zone resets 00:10:39.141 slat (usec): min=17, max=116, avg=24.33, stdev= 6.21 00:10:39.141 clat (usec): min=106, max=600, avg=142.99, stdev=20.49 00:10:39.141 lat (usec): min=128, max=631, avg=167.32, stdev=22.41 00:10:39.141 clat percentiles (usec): 00:10:39.141 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 129], 00:10:39.141 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:10:39.141 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 174], 00:10:39.141 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 424], 99.95th=[ 457], 00:10:39.141 | 99.99th=[ 603] 00:10:39.141 bw ( KiB/s): min=12288, max=12288, per=33.24%, avg=12288.00, stdev= 0.00, samples=1 00:10:39.141 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:39.141 lat (usec) : 250=99.59%, 500=0.37%, 750=0.02% 00:10:39.141 lat (msec) : 2=0.02% 00:10:39.141 cpu : usr=2.90%, sys=7.80%, ctx=5141, majf=0, minf=17 00:10:39.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.141 issued rwts: total=2560,2578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.141 job3: (groupid=0, jobs=1): err= 0: pid=68721: Mon Jul 15 12:35:11 2024 00:10:39.141 read: IOPS=1640, BW=6561KiB/s (6719kB/s)(6568KiB/1001msec) 00:10:39.141 slat (usec): min=9, max=197, avg=12.97, stdev= 7.00 00:10:39.141 clat (usec): min=145, max=1619, avg=293.60, stdev=44.25 00:10:39.141 lat (usec): min=247, max=1630, avg=306.57, stdev=45.01 00:10:39.141 clat percentiles (usec): 00:10:39.141 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:10:39.141 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:10:39.141 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 334], 00:10:39.141 | 99.00th=[ 408], 99.50th=[ 498], 99.90th=[ 603], 99.95th=[ 1614], 00:10:39.141 | 99.99th=[ 1614] 00:10:39.141 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:39.141 slat (usec): min=11, max=395, avg=20.31, stdev=10.96 00:10:39.141 clat (usec): min=117, max=455, avg=219.57, stdev=23.43 00:10:39.141 lat (usec): min=186, max=684, avg=239.88, stdev=27.04 00:10:39.141 clat percentiles (usec): 00:10:39.141 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:10:39.141 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:10:39.141 | 70.00th=[ 229], 80.00th=[ 
237], 90.00th=[ 249], 95.00th=[ 262], 00:10:39.141 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 363], 99.95th=[ 429], 00:10:39.141 | 99.99th=[ 457] 00:10:39.141 bw ( KiB/s): min= 8192, max= 8192, per=22.16%, avg=8192.00, stdev= 0.00, samples=1 00:10:39.141 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:39.141 lat (usec) : 250=50.79%, 500=49.00%, 750=0.19% 00:10:39.141 lat (msec) : 2=0.03% 00:10:39.141 cpu : usr=1.30%, sys=5.20%, ctx=3694, majf=0, minf=12 00:10:39.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.141 issued rwts: total=1642,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.141 00:10:39.141 Run status group 0 (all jobs): 00:10:39.141 READ: bw=32.8MiB/s (34.4MB/s), 6561KiB/s-9.99MiB/s (6719kB/s-10.5MB/s), io=32.8MiB (34.4MB), run=1000-1001msec 00:10:39.141 WRITE: bw=36.1MiB/s (37.8MB/s), 8184KiB/s-10.1MiB/s (8380kB/s-10.5MB/s), io=36.1MiB (37.9MB), run=1000-1001msec 00:10:39.141 00:10:39.141 Disk stats (read/write): 00:10:39.141 nvme0n1: ios=2098/2434, merge=0/0, ticks=447/363, in_queue=810, util=88.58% 00:10:39.141 nvme0n2: ios=1580/1615, merge=0/0, ticks=471/356, in_queue=827, util=88.65% 00:10:39.141 nvme0n3: ios=2048/2420, merge=0/0, ticks=417/375, in_queue=792, util=89.06% 00:10:39.141 nvme0n4: ios=1536/1614, merge=0/0, ticks=414/322, in_queue=736, util=89.71% 00:10:39.141 12:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:39.141 [global] 00:10:39.141 thread=1 00:10:39.141 invalidate=1 00:10:39.141 rw=write 00:10:39.141 time_based=1 00:10:39.141 runtime=1 00:10:39.141 ioengine=libaio 00:10:39.141 direct=1 00:10:39.141 bs=4096 00:10:39.141 iodepth=128 00:10:39.141 norandommap=0 00:10:39.141 numjobs=1 00:10:39.141 00:10:39.141 verify_dump=1 00:10:39.141 verify_backlog=512 00:10:39.141 verify_state_save=0 00:10:39.141 do_verify=1 00:10:39.141 verify=crc32c-intel 00:10:39.141 [job0] 00:10:39.141 filename=/dev/nvme0n1 00:10:39.141 [job1] 00:10:39.141 filename=/dev/nvme0n2 00:10:39.141 [job2] 00:10:39.141 filename=/dev/nvme0n3 00:10:39.141 [job3] 00:10:39.141 filename=/dev/nvme0n4 00:10:39.141 Could not set queue depth (nvme0n1) 00:10:39.141 Could not set queue depth (nvme0n2) 00:10:39.141 Could not set queue depth (nvme0n3) 00:10:39.141 Could not set queue depth (nvme0n4) 00:10:39.141 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.141 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.141 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.141 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.141 fio-3.35 00:10:39.141 Starting 4 threads 00:10:40.517 00:10:40.517 job0: (groupid=0, jobs=1): err= 0: pid=68775: Mon Jul 15 12:35:12 2024 00:10:40.517 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:10:40.517 slat (usec): min=6, max=9024, avg=189.83, stdev=981.85 00:10:40.517 clat (usec): min=14016, max=37628, avg=23894.95, stdev=4836.51 00:10:40.517 lat (usec): min=16847, max=37647, avg=24084.78, stdev=4792.09 
00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[16909], 5.00th=[18744], 10.00th=[20055], 20.00th=[21103], 00:10:40.517 | 30.00th=[21627], 40.00th=[21890], 50.00th=[21890], 60.00th=[22414], 00:10:40.517 | 70.00th=[23725], 80.00th=[27657], 90.00th=[32113], 95.00th=[32900], 00:10:40.517 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:10:40.517 | 99.99th=[37487] 00:10:40.517 write: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1003msec); 0 zone resets 00:10:40.517 slat (usec): min=14, max=9253, avg=166.28, stdev=807.12 00:10:40.517 clat (usec): min=1175, max=38300, avg=22086.09, stdev=6172.98 00:10:40.517 lat (usec): min=5307, max=38358, avg=22252.38, stdev=6147.31 00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[ 6194], 5.00th=[16319], 10.00th=[16712], 20.00th=[17433], 00:10:40.517 | 30.00th=[18220], 40.00th=[19006], 50.00th=[20055], 60.00th=[22414], 00:10:40.517 | 70.00th=[23725], 80.00th=[26608], 90.00th=[31589], 95.00th=[36439], 00:10:40.517 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:10:40.517 | 99.99th=[38536] 00:10:40.517 bw ( KiB/s): min=10730, max=11807, per=22.91%, avg=11268.50, stdev=761.55, samples=2 00:10:40.517 iops : min= 2682, max= 2951, avg=2816.50, stdev=190.21, samples=2 00:10:40.517 lat (msec) : 2=0.02%, 10=0.58%, 20=31.06%, 50=68.34% 00:10:40.517 cpu : usr=3.09%, sys=9.88%, ctx=173, majf=0, minf=6 00:10:40.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:40.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.517 issued rwts: total=2560,2945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.517 job1: (groupid=0, jobs=1): err= 0: pid=68776: Mon Jul 15 12:35:12 2024 00:10:40.517 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:10:40.517 slat (usec): min=6, max=10385, avg=248.36, stdev=1030.46 00:10:40.517 clat (usec): min=18423, max=55088, avg=29690.27, stdev=5767.13 00:10:40.517 lat (usec): min=18449, max=55173, avg=29938.63, stdev=5874.22 00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[19792], 5.00th=[22676], 10.00th=[23462], 20.00th=[24249], 00:10:40.517 | 30.00th=[25035], 40.00th=[27395], 50.00th=[28181], 60.00th=[31065], 00:10:40.517 | 70.00th=[33817], 80.00th=[34866], 90.00th=[35914], 95.00th=[39060], 00:10:40.517 | 99.00th=[47449], 99.50th=[51119], 99.90th=[51119], 99.95th=[55313], 00:10:40.517 | 99.99th=[55313] 00:10:40.517 write: IOPS=1870, BW=7483KiB/s (7663kB/s)(7528KiB/1006msec); 0 zone resets 00:10:40.517 slat (usec): min=13, max=10230, avg=320.22, stdev=1078.79 00:10:40.517 clat (usec): min=3977, max=72837, avg=42875.53, stdev=13630.75 00:10:40.517 lat (usec): min=8931, max=72867, avg=43195.75, stdev=13687.50 00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[18482], 5.00th=[24511], 10.00th=[26346], 20.00th=[27919], 00:10:40.517 | 30.00th=[32113], 40.00th=[41681], 50.00th=[43254], 60.00th=[44827], 00:10:40.517 | 70.00th=[49546], 80.00th=[55837], 90.00th=[62653], 95.00th=[66323], 00:10:40.517 | 99.00th=[71828], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:10:40.517 | 99.99th=[72877] 00:10:40.517 bw ( KiB/s): min= 5828, max= 8208, per=14.27%, avg=7018.00, stdev=1682.91, samples=2 00:10:40.517 iops : min= 1457, max= 2052, avg=1754.50, stdev=420.73, samples=2 00:10:40.517 lat (msec) : 4=0.03%, 10=0.23%, 20=0.94%, 50=82.77%, 
100=16.03% 00:10:40.517 cpu : usr=1.89%, sys=6.97%, ctx=256, majf=0, minf=5 00:10:40.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:10:40.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.517 issued rwts: total=1536,1882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.517 job2: (groupid=0, jobs=1): err= 0: pid=68777: Mon Jul 15 12:35:12 2024 00:10:40.517 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:10:40.517 slat (usec): min=8, max=8452, avg=151.31, stdev=668.39 00:10:40.517 clat (usec): min=854, max=27012, avg=19478.31, stdev=1850.02 00:10:40.517 lat (usec): min=7182, max=27531, avg=19629.61, stdev=1825.62 00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[15008], 5.00th=[16581], 10.00th=[17695], 20.00th=[18744], 00:10:40.517 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:10:40.517 | 70.00th=[20055], 80.00th=[20317], 90.00th=[21103], 95.00th=[22414], 00:10:40.517 | 99.00th=[25297], 99.50th=[26084], 99.90th=[26084], 99.95th=[26870], 00:10:40.517 | 99.99th=[27132] 00:10:40.517 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:10:40.517 slat (usec): min=14, max=8285, avg=141.66, stdev=874.51 00:10:40.517 clat (usec): min=7417, max=28038, avg=18699.83, stdev=2234.39 00:10:40.517 lat (usec): min=7445, max=28096, avg=18841.49, stdev=2379.01 00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[ 8586], 5.00th=[15008], 10.00th=[17171], 20.00th=[18220], 00:10:40.517 | 30.00th=[18482], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:10:40.517 | 70.00th=[19530], 80.00th=[19530], 90.00th=[20055], 95.00th=[21890], 00:10:40.517 | 99.00th=[25822], 99.50th=[26608], 99.90th=[27395], 99.95th=[27919], 00:10:40.517 | 99.99th=[27919] 00:10:40.517 bw ( KiB/s): min=13608, max=14107, per=28.18%, avg=13857.50, stdev=352.85, samples=2 00:10:40.517 iops : min= 3402, max= 3526, avg=3464.00, stdev=87.68, samples=2 00:10:40.517 lat (usec) : 1000=0.01% 00:10:40.517 lat (msec) : 10=0.97%, 20=80.85%, 50=18.16% 00:10:40.517 cpu : usr=4.27%, sys=11.23%, ctx=241, majf=0, minf=3 00:10:40.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:40.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.517 issued rwts: total=3084,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.517 job3: (groupid=0, jobs=1): err= 0: pid=68778: Mon Jul 15 12:35:12 2024 00:10:40.517 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:40.517 slat (usec): min=10, max=4562, avg=129.46, stdev=619.07 00:10:40.517 clat (usec): min=11762, max=18819, avg=17166.30, stdev=919.70 00:10:40.517 lat (usec): min=14799, max=18852, avg=17295.76, stdev=690.23 00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[13566], 5.00th=[15795], 10.00th=[16319], 20.00th=[16581], 00:10:40.517 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:10:40.517 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:10:40.517 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:10:40.517 | 99.99th=[18744] 00:10:40.517 write: IOPS=3961, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1002msec); 0 
zone resets 00:10:40.517 slat (usec): min=13, max=4189, avg=126.86, stdev=554.12 00:10:40.517 clat (usec): min=268, max=18772, avg=16334.83, stdev=1757.31 00:10:40.517 lat (usec): min=3669, max=18812, avg=16461.69, stdev=1669.79 00:10:40.517 clat percentiles (usec): 00:10:40.517 | 1.00th=[ 7308], 5.00th=[14353], 10.00th=[15664], 20.00th=[15926], 00:10:40.517 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:10:40.517 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18220], 00:10:40.517 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:10:40.517 | 99.99th=[18744] 00:10:40.517 bw ( KiB/s): min=14315, max=16416, per=31.25%, avg=15365.50, stdev=1485.63, samples=2 00:10:40.518 iops : min= 3578, max= 4104, avg=3841.00, stdev=371.94, samples=2 00:10:40.518 lat (usec) : 500=0.01% 00:10:40.518 lat (msec) : 4=0.16%, 10=0.69%, 20=99.14% 00:10:40.518 cpu : usr=4.30%, sys=12.79%, ctx=240, majf=0, minf=1 00:10:40.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.518 issued rwts: total=3584,3969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.518 00:10:40.518 Run status group 0 (all jobs): 00:10:40.518 READ: bw=41.8MiB/s (43.8MB/s), 6107KiB/s-14.0MiB/s (6254kB/s-14.7MB/s), io=42.0MiB (44.1MB), run=1002-1007msec 00:10:40.518 WRITE: bw=48.0MiB/s (50.4MB/s), 7483KiB/s-15.5MiB/s (7663kB/s-16.2MB/s), io=48.4MiB (50.7MB), run=1002-1007msec 00:10:40.518 00:10:40.518 Disk stats (read/write): 00:10:40.518 nvme0n1: ios=2130/2560, merge=0/0, ticks=12389/12639, in_queue=25028, util=88.37% 00:10:40.518 nvme0n2: ios=1515/1536, merge=0/0, ticks=14628/20366, in_queue=34994, util=89.69% 00:10:40.518 nvme0n3: ios=2623/3072, merge=0/0, ticks=25344/24810, in_queue=50154, util=89.40% 00:10:40.518 nvme0n4: ios=3072/3456, merge=0/0, ticks=11851/12370, in_queue=24221, util=89.76% 00:10:40.518 12:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:40.518 [global] 00:10:40.518 thread=1 00:10:40.518 invalidate=1 00:10:40.518 rw=randwrite 00:10:40.518 time_based=1 00:10:40.518 runtime=1 00:10:40.518 ioengine=libaio 00:10:40.518 direct=1 00:10:40.518 bs=4096 00:10:40.518 iodepth=128 00:10:40.518 norandommap=0 00:10:40.518 numjobs=1 00:10:40.518 00:10:40.518 verify_dump=1 00:10:40.518 verify_backlog=512 00:10:40.518 verify_state_save=0 00:10:40.518 do_verify=1 00:10:40.518 verify=crc32c-intel 00:10:40.518 [job0] 00:10:40.518 filename=/dev/nvme0n1 00:10:40.518 [job1] 00:10:40.518 filename=/dev/nvme0n2 00:10:40.518 [job2] 00:10:40.518 filename=/dev/nvme0n3 00:10:40.518 [job3] 00:10:40.518 filename=/dev/nvme0n4 00:10:40.518 Could not set queue depth (nvme0n1) 00:10:40.518 Could not set queue depth (nvme0n2) 00:10:40.518 Could not set queue depth (nvme0n3) 00:10:40.518 Could not set queue depth (nvme0n4) 00:10:40.518 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.518 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.518 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.518 job3: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.518 fio-3.35 00:10:40.518 Starting 4 threads 00:10:41.892 00:10:41.892 job0: (groupid=0, jobs=1): err= 0: pid=68837: Mon Jul 15 12:35:14 2024 00:10:41.892 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:10:41.892 slat (usec): min=10, max=9054, avg=92.80, stdev=578.13 00:10:41.892 clat (usec): min=4797, max=29144, avg=12880.20, stdev=2792.41 00:10:41.892 lat (usec): min=4813, max=34801, avg=12973.00, stdev=2820.03 00:10:41.892 clat percentiles (usec): 00:10:41.892 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:10:41.892 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11994], 60.00th=[12911], 00:10:41.892 | 70.00th=[13435], 80.00th=[14091], 90.00th=[17695], 95.00th=[18744], 00:10:41.892 | 99.00th=[20055], 99.50th=[20055], 99.90th=[28967], 99.95th=[29230], 00:10:41.892 | 99.99th=[29230] 00:10:41.892 write: IOPS=5145, BW=20.1MiB/s (21.1MB/s)(20.1MiB/1002msec); 0 zone resets 00:10:41.892 slat (usec): min=9, max=11100, avg=94.07, stdev=564.72 00:10:41.892 clat (usec): min=1760, max=23464, avg=11842.47, stdev=2859.53 00:10:41.892 lat (usec): min=1786, max=23756, avg=11936.53, stdev=2832.51 00:10:41.892 clat percentiles (usec): 00:10:41.892 | 1.00th=[ 6587], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:10:41.892 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11600], 00:10:41.892 | 70.00th=[12125], 80.00th=[13042], 90.00th=[16581], 95.00th=[17957], 00:10:41.892 | 99.00th=[20055], 99.50th=[20055], 99.90th=[23462], 99.95th=[23462], 00:10:41.892 | 99.99th=[23462] 00:10:41.892 bw ( KiB/s): min=17947, max=23048, per=33.39%, avg=20497.50, stdev=3606.95, samples=2 00:10:41.893 iops : min= 4486, max= 5762, avg=5124.00, stdev=902.27, samples=2 00:10:41.893 lat (msec) : 2=0.08%, 4=0.26%, 10=13.97%, 20=84.96%, 50=0.72% 00:10:41.893 cpu : usr=4.50%, sys=14.79%, ctx=272, majf=0, minf=11 00:10:41.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:41.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.893 issued rwts: total=5120,5156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.893 job1: (groupid=0, jobs=1): err= 0: pid=68838: Mon Jul 15 12:35:14 2024 00:10:41.893 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(9.87MiB/1001msec) 00:10:41.893 slat (usec): min=7, max=10721, avg=200.13, stdev=860.26 00:10:41.893 clat (usec): min=420, max=37072, avg=24246.62, stdev=4985.78 00:10:41.893 lat (usec): min=436, max=42070, avg=24446.75, stdev=5025.09 00:10:41.893 clat percentiles (usec): 00:10:41.893 | 1.00th=[ 3523], 5.00th=[15664], 10.00th=[19530], 20.00th=[22152], 00:10:41.893 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24773], 00:10:41.893 | 70.00th=[26084], 80.00th=[28443], 90.00th=[29754], 95.00th=[31065], 00:10:41.893 | 99.00th=[32637], 99.50th=[33162], 99.90th=[36439], 99.95th=[36439], 00:10:41.893 | 99.99th=[36963] 00:10:41.893 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:41.893 slat (usec): min=5, max=14040, avg=184.82, stdev=825.28 00:10:41.893 clat (usec): min=11205, max=34235, avg=24552.42, stdev=4124.66 00:10:41.893 lat (usec): min=11234, max=37700, avg=24737.24, stdev=4129.62 00:10:41.893 clat percentiles (usec): 00:10:41.893 | 1.00th=[12518], 5.00th=[17695], 10.00th=[19530], 20.00th=[21627], 
00:10:41.893 | 30.00th=[22676], 40.00th=[23462], 50.00th=[24511], 60.00th=[25297], 00:10:41.893 | 70.00th=[26346], 80.00th=[28705], 90.00th=[29754], 95.00th=[31327], 00:10:41.893 | 99.00th=[32637], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:10:41.893 | 99.99th=[34341] 00:10:41.893 bw ( KiB/s): min= 9960, max= 9960, per=16.22%, avg=9960.00, stdev= 0.00, samples=1 00:10:41.893 iops : min= 2490, max= 2490, avg=2490.00, stdev= 0.00, samples=1 00:10:41.893 lat (usec) : 500=0.08%, 750=0.04% 00:10:41.893 lat (msec) : 2=0.04%, 4=0.59%, 10=0.55%, 20=10.58%, 50=88.13% 00:10:41.893 cpu : usr=2.70%, sys=8.00%, ctx=623, majf=0, minf=12 00:10:41.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:41.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.893 issued rwts: total=2527,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.893 job2: (groupid=0, jobs=1): err= 0: pid=68839: Mon Jul 15 12:35:14 2024 00:10:41.893 read: IOPS=4714, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1004msec) 00:10:41.893 slat (usec): min=9, max=8970, avg=94.61, stdev=581.05 00:10:41.893 clat (usec): min=1769, max=28599, avg=13109.65, stdev=2220.69 00:10:41.893 lat (usec): min=5849, max=34034, avg=13204.26, stdev=2241.55 00:10:41.893 clat percentiles (usec): 00:10:41.893 | 1.00th=[ 6718], 5.00th=[10683], 10.00th=[11731], 20.00th=[12125], 00:10:41.893 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:10:41.893 | 70.00th=[13173], 80.00th=[13566], 90.00th=[16188], 95.00th=[18220], 00:10:41.893 | 99.00th=[19530], 99.50th=[20055], 99.90th=[28705], 99.95th=[28705], 00:10:41.893 | 99.99th=[28705] 00:10:41.893 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:41.893 slat (usec): min=5, max=15198, avg=100.46, stdev=615.51 00:10:41.893 clat (usec): min=6404, max=25348, avg=12717.54, stdev=2770.17 00:10:41.893 lat (usec): min=8623, max=25377, avg=12818.00, stdev=2735.45 00:10:41.893 clat percentiles (usec): 00:10:41.893 | 1.00th=[ 8225], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:10:41.893 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:10:41.893 | 70.00th=[12387], 80.00th=[15008], 90.00th=[17171], 95.00th=[17957], 00:10:41.893 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:10:41.893 | 99.99th=[25297] 00:10:41.893 bw ( KiB/s): min=19486, max=21488, per=33.37%, avg=20487.00, stdev=1415.63, samples=2 00:10:41.893 iops : min= 4871, max= 5372, avg=5121.50, stdev=354.26, samples=2 00:10:41.893 lat (msec) : 2=0.01%, 10=3.40%, 20=95.52%, 50=1.07% 00:10:41.893 cpu : usr=4.49%, sys=15.15%, ctx=210, majf=0, minf=11 00:10:41.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:41.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.893 issued rwts: total=4733,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.893 job3: (groupid=0, jobs=1): err= 0: pid=68840: Mon Jul 15 12:35:14 2024 00:10:41.893 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:41.893 slat (usec): min=6, max=11772, avg=200.21, stdev=811.97 00:10:41.893 clat (usec): min=8331, max=44264, avg=25032.45, stdev=4465.20 00:10:41.893 
lat (usec): min=8348, max=45452, avg=25232.67, stdev=4493.24 00:10:41.893 clat percentiles (usec): 00:10:41.893 | 1.00th=[11731], 5.00th=[19268], 10.00th=[20055], 20.00th=[22676], 00:10:41.893 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:10:41.893 | 70.00th=[26084], 80.00th=[28967], 90.00th=[30540], 95.00th=[32375], 00:10:41.893 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:10:41.893 | 99.99th=[44303] 00:10:41.893 write: IOPS=2562, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1004msec); 0 zone resets 00:10:41.893 slat (usec): min=6, max=14756, avg=181.08, stdev=818.45 00:10:41.893 clat (usec): min=3533, max=40531, avg=24531.16, stdev=5451.37 00:10:41.893 lat (usec): min=4754, max=46764, avg=24712.24, stdev=5462.70 00:10:41.893 clat percentiles (usec): 00:10:41.893 | 1.00th=[10552], 5.00th=[15401], 10.00th=[17433], 20.00th=[21103], 00:10:41.893 | 30.00th=[22676], 40.00th=[23462], 50.00th=[24773], 60.00th=[25560], 00:10:41.893 | 70.00th=[26608], 80.00th=[27919], 90.00th=[30278], 95.00th=[33424], 00:10:41.893 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:10:41.893 | 99.99th=[40633] 00:10:41.893 bw ( KiB/s): min= 9912, max=10568, per=16.68%, avg=10240.00, stdev=463.86, samples=2 00:10:41.893 iops : min= 2478, max= 2642, avg=2560.00, stdev=115.97, samples=2 00:10:41.893 lat (msec) : 4=0.02%, 10=0.49%, 20=13.23%, 50=86.27% 00:10:41.893 cpu : usr=3.59%, sys=7.18%, ctx=598, majf=0, minf=7 00:10:41.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:41.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.893 issued rwts: total=2560,2573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.893 00:10:41.893 Run status group 0 (all jobs): 00:10:41.893 READ: bw=58.1MiB/s (60.9MB/s), 9.86MiB/s-20.0MiB/s (10.3MB/s-20.9MB/s), io=58.4MiB (61.2MB), run=1001-1004msec 00:10:41.893 WRITE: bw=60.0MiB/s (62.9MB/s), 9.99MiB/s-20.1MiB/s (10.5MB/s-21.1MB/s), io=60.2MiB (63.1MB), run=1001-1004msec 00:10:41.893 00:10:41.893 Disk stats (read/write): 00:10:41.893 nvme0n1: ios=4146/4480, merge=0/0, ticks=51520/50028, in_queue=101548, util=89.18% 00:10:41.893 nvme0n2: ios=2094/2215, merge=0/0, ticks=25304/25591, in_queue=50895, util=86.86% 00:10:41.893 nvme0n3: ios=4113/4222, merge=0/0, ticks=51016/50147, in_queue=101163, util=89.71% 00:10:41.893 nvme0n4: ios=2048/2329, merge=0/0, ticks=26053/26570, in_queue=52623, util=89.02% 00:10:41.893 12:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:41.893 12:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68854 00:10:41.893 12:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:41.893 12:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:41.893 [global] 00:10:41.893 thread=1 00:10:41.893 invalidate=1 00:10:41.893 rw=read 00:10:41.893 time_based=1 00:10:41.893 runtime=10 00:10:41.893 ioengine=libaio 00:10:41.893 direct=1 00:10:41.893 bs=4096 00:10:41.893 iodepth=1 00:10:41.893 norandommap=1 00:10:41.893 numjobs=1 00:10:41.893 00:10:41.893 [job0] 00:10:41.893 filename=/dev/nvme0n1 00:10:41.893 [job1] 00:10:41.893 filename=/dev/nvme0n2 00:10:41.893 [job2] 00:10:41.893 filename=/dev/nvme0n3 00:10:41.893 [job3] 00:10:41.893 filename=/dev/nvme0n4 00:10:41.893 
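The job description just printed belongs to the hotplug pass of target/fio.sh: the wrapper starts this 10-second read workload in the background, and the script then deletes the RAID and malloc bdevs underneath the still-connected namespaces, expecting fio to fail with Remote I/O errors (hence the later 'nvmf hotplug test: fio failed as expected' message). A minimal sketch of that sequence, reconstructed only from the commands visible in this log; the variable names and the exact cleanup order are illustrative, not the script's own code:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
fio_wrapper=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper

# Start the 10-second read job against the connected nvme0n1..nvme0n4 namespaces in the background.
"$fio_wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the storage out from under the running job: RAID bdevs first, then every malloc bdev.
"$rpc" bdev_raid_delete concat0
"$rpc" bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$rpc" bdev_malloc_delete "$m"
done

# fio is expected to exit non-zero with err=121 (Remote I/O error) once its devices vanish.
if ! wait "$fio_pid"; then
    echo 'nvmf hotplug test: fio failed as expected'
fi

In the run captured below, the wait on pid 68854 returns status 4, which the script treats as the expected failure before disconnecting from nqn.2016-06.io.spdk:cnode1 and tearing the target down.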
Could not set queue depth (nvme0n1) 00:10:41.893 Could not set queue depth (nvme0n2) 00:10:41.893 Could not set queue depth (nvme0n3) 00:10:41.893 Could not set queue depth (nvme0n4) 00:10:41.893 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.893 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.893 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.893 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.893 fio-3.35 00:10:41.893 Starting 4 threads 00:10:45.227 12:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:45.227 fio: pid=68897, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:45.227 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=57442304, buflen=4096 00:10:45.227 12:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:45.227 fio: pid=68896, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:45.227 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=43687936, buflen=4096 00:10:45.227 12:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.227 12:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:45.507 fio: pid=68894, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:45.507 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=65925120, buflen=4096 00:10:45.507 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.507 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:45.779 fio: pid=68895, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:45.779 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=52383744, buflen=4096 00:10:45.779 00:10:45.779 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68894: Mon Jul 15 12:35:18 2024 00:10:45.779 read: IOPS=4680, BW=18.3MiB/s (19.2MB/s)(62.9MiB/3439msec) 00:10:45.779 slat (usec): min=12, max=15470, avg=18.11, stdev=193.62 00:10:45.779 clat (usec): min=134, max=2672, avg=194.02, stdev=60.65 00:10:45.779 lat (usec): min=148, max=15754, avg=212.14, stdev=208.62 00:10:45.779 clat percentiles (usec): 00:10:45.779 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:10:45.779 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 176], 00:10:45.779 | 70.00th=[ 229], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 289], 00:10:45.779 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 441], 99.95th=[ 766], 00:10:45.779 | 99.99th=[ 2114] 00:10:45.779 bw ( KiB/s): min=13464, max=22984, per=33.58%, avg=19434.67, stdev=4464.72, samples=6 00:10:45.779 iops : min= 3366, max= 5746, avg=4858.67, stdev=1116.18, samples=6 00:10:45.779 lat (usec) : 250=79.89%, 500=20.02%, 750=0.02%, 1000=0.02% 00:10:45.779 lat (msec) : 2=0.02%, 4=0.02% 00:10:45.779 cpu : usr=1.66%, sys=6.22%, ctx=16112, majf=0, minf=1 00:10:45.779 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 issued rwts: total=16096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.779 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68895: Mon Jul 15 12:35:18 2024 00:10:45.779 read: IOPS=3453, BW=13.5MiB/s (14.1MB/s)(50.0MiB/3703msec) 00:10:45.779 slat (usec): min=13, max=18957, avg=22.06, stdev=237.21 00:10:45.779 clat (usec): min=135, max=2530, avg=265.73, stdev=68.31 00:10:45.779 lat (usec): min=149, max=19227, avg=287.80, stdev=247.82 00:10:45.779 clat percentiles (usec): 00:10:45.779 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 245], 00:10:45.779 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:10:45.779 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 343], 00:10:45.779 | 99.00th=[ 404], 99.50th=[ 537], 99.90th=[ 930], 99.95th=[ 1172], 00:10:45.779 | 99.99th=[ 2147] 00:10:45.779 bw ( KiB/s): min=12088, max=14816, per=23.54%, avg=13624.00, stdev=1040.75, samples=7 00:10:45.779 iops : min= 3022, max= 3704, avg=3406.00, stdev=260.19, samples=7 00:10:45.779 lat (usec) : 250=26.39%, 500=73.03%, 750=0.38%, 1000=0.12% 00:10:45.779 lat (msec) : 2=0.05%, 4=0.02% 00:10:45.779 cpu : usr=1.38%, sys=4.94%, ctx=12798, majf=0, minf=1 00:10:45.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 issued rwts: total=12790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.779 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68896: Mon Jul 15 12:35:18 2024 00:10:45.779 read: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(41.7MiB/3207msec) 00:10:45.779 slat (usec): min=13, max=7700, avg=18.77, stdev=102.64 00:10:45.779 clat (usec): min=158, max=2211, avg=279.95, stdev=57.07 00:10:45.779 lat (usec): min=173, max=8089, avg=298.72, stdev=118.61 00:10:45.779 clat percentiles (usec): 00:10:45.779 | 1.00th=[ 219], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:10:45.779 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:10:45.779 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 347], 00:10:45.779 | 99.00th=[ 429], 99.50th=[ 506], 99.90th=[ 979], 99.95th=[ 1188], 00:10:45.779 | 99.99th=[ 2114] 00:10:45.779 bw ( KiB/s): min=12136, max=14200, per=23.18%, avg=13412.00, stdev=952.90, samples=6 00:10:45.779 iops : min= 3034, max= 3550, avg=3353.00, stdev=238.23, samples=6 00:10:45.779 lat (usec) : 250=13.89%, 500=85.57%, 750=0.38%, 1000=0.06% 00:10:45.779 lat (msec) : 2=0.07%, 4=0.02% 00:10:45.779 cpu : usr=1.00%, sys=5.43%, ctx=10671, majf=0, minf=1 00:10:45.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 issued rwts: total=10667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.779 job3: (groupid=0, 
jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68897: Mon Jul 15 12:35:18 2024 00:10:45.779 read: IOPS=4834, BW=18.9MiB/s (19.8MB/s)(54.8MiB/2901msec) 00:10:45.779 slat (usec): min=12, max=105, avg=17.31, stdev= 5.81 00:10:45.779 clat (usec): min=144, max=1719, avg=187.76, stdev=33.20 00:10:45.779 lat (usec): min=159, max=1734, avg=205.07, stdev=35.05 00:10:45.779 clat percentiles (usec): 00:10:45.779 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:45.779 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:10:45.779 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 231], 95.00th=[ 251], 00:10:45.779 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 330], 99.95th=[ 469], 00:10:45.779 | 99.99th=[ 750] 00:10:45.779 bw ( KiB/s): min=16200, max=21376, per=34.14%, avg=19760.00, stdev=2103.77, samples=5 00:10:45.779 iops : min= 4050, max= 5344, avg=4940.00, stdev=525.94, samples=5 00:10:45.779 lat (usec) : 250=94.67%, 500=5.28%, 750=0.04% 00:10:45.779 lat (msec) : 2=0.01% 00:10:45.779 cpu : usr=2.00%, sys=7.34%, ctx=14026, majf=0, minf=1 00:10:45.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.779 issued rwts: total=14025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.779 00:10:45.779 Run status group 0 (all jobs): 00:10:45.779 READ: bw=56.5MiB/s (59.3MB/s), 13.0MiB/s-18.9MiB/s (13.6MB/s-19.8MB/s), io=209MiB (219MB), run=2901-3703msec 00:10:45.779 00:10:45.779 Disk stats (read/write): 00:10:45.779 nvme0n1: ios=15850/0, merge=0/0, ticks=3089/0, in_queue=3089, util=95.05% 00:10:45.779 nvme0n2: ios=12347/0, merge=0/0, ticks=3360/0, in_queue=3360, util=95.26% 00:10:45.779 nvme0n3: ios=10404/0, merge=0/0, ticks=2951/0, in_queue=2951, util=96.34% 00:10:45.779 nvme0n4: ios=13887/0, merge=0/0, ticks=2634/0, in_queue=2634, util=96.70% 00:10:45.779 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.779 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:46.037 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.037 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:46.295 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.295 12:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:46.553 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.553 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:46.812 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.812 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:47.071 12:35:19 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68854 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:47.071 nvmf hotplug test: fio failed as expected 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:47.071 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:47.330 rmmod nvme_tcp 00:10:47.330 rmmod nvme_fabrics 00:10:47.330 rmmod nvme_keyring 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68471 ']' 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68471 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68471 ']' 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68471 00:10:47.330 12:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:47.330 12:35:20 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:47.330 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68471 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:47.589 killing process with pid 68471 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68471' 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68471 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68471 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:47.589 12:35:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:47.849 00:10:47.849 real 0m19.444s 00:10:47.849 user 1m13.379s 00:10:47.849 sys 0m9.908s 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.849 12:35:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.849 ************************************ 00:10:47.849 END TEST nvmf_fio_target 00:10:47.849 ************************************ 00:10:47.849 12:35:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:47.849 12:35:20 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:47.849 12:35:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:47.849 12:35:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.849 12:35:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.849 ************************************ 00:10:47.849 START TEST nvmf_bdevio 00:10:47.849 ************************************ 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:47.849 * Looking for test storage... 
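The nvmf_bdevio run that starts here first rebuilds the veth/bridge test topology (nvmf_veth_init) before launching a fresh target; every command is traced in full below. Condensed into a readable sketch, with interface names and addresses taken from that trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                             # bridge joining the two root-side veth peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # connectivity check, as in the trace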
00:10:47.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.849 12:35:20 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:47.849 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:47.850 Cannot find device "nvmf_tgt_br" 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:47.850 Cannot find device "nvmf_tgt_br2" 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:47.850 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:48.109 Cannot find device "nvmf_tgt_br" 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:48.109 Cannot find device "nvmf_tgt_br2" 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:48.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:48.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:48.109 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:48.368 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:48.368 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:48.368 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:48.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:10:48.369 00:10:48.369 --- 10.0.0.2 ping statistics --- 00:10:48.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.369 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:48.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:48.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:10:48.369 00:10:48.369 --- 10.0.0.3 ping statistics --- 00:10:48.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.369 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:48.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:10:48.369 00:10:48.369 --- 10.0.0.1 ping statistics --- 00:10:48.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.369 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69162 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69162 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69162 ']' 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.369 12:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.369 [2024-07-15 12:35:20.905921] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:48.369 [2024-07-15 12:35:20.906016] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.369 [2024-07-15 12:35:21.043725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.628 [2024-07-15 12:35:21.163885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.628 [2024-07-15 12:35:21.163956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
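Once waitforlisten sees the freshly launched nvmf_tgt answering on /var/tmp/spdk.sock (the rpc_addr traced above), the rpc_cmd provisioning traced below reduces to plain scripts/rpc.py calls. A sketch with the same arguments, assuming rpc_cmd simply forwards them to rpc.py on that socket:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192                     # flags exactly as passed by bdevio.sh below
$rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420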
00:10:48.628 [2024-07-15 12:35:21.163970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.628 [2024-07-15 12:35:21.163979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.628 [2024-07-15 12:35:21.163986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.628 [2024-07-15 12:35:21.164596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:48.628 [2024-07-15 12:35:21.164780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:48.628 [2024-07-15 12:35:21.164903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:48.628 [2024-07-15 12:35:21.164907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.628 [2024-07-15 12:35:21.221262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:49.196 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.196 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:49.196 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.196 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:49.196 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.455 [2024-07-15 12:35:21.916029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.455 Malloc0 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.455 [2024-07-15 12:35:21.978446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:49.455 { 00:10:49.455 "params": { 00:10:49.455 "name": "Nvme$subsystem", 00:10:49.455 "trtype": "$TEST_TRANSPORT", 00:10:49.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.455 "adrfam": "ipv4", 00:10:49.455 "trsvcid": "$NVMF_PORT", 00:10:49.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.455 "hdgst": ${hdgst:-false}, 00:10:49.455 "ddgst": ${ddgst:-false} 00:10:49.455 }, 00:10:49.455 "method": "bdev_nvme_attach_controller" 00:10:49.455 } 00:10:49.455 EOF 00:10:49.455 )") 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:49.455 12:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:49.455 "params": { 00:10:49.455 "name": "Nvme1", 00:10:49.455 "trtype": "tcp", 00:10:49.455 "traddr": "10.0.0.2", 00:10:49.455 "adrfam": "ipv4", 00:10:49.455 "trsvcid": "4420", 00:10:49.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.455 "hdgst": false, 00:10:49.455 "ddgst": false 00:10:49.455 }, 00:10:49.455 "method": "bdev_nvme_attach_controller" 00:10:49.455 }' 00:10:49.455 [2024-07-15 12:35:22.042290] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
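bdevio consumes the JSON generated above directly (via --json /dev/fd/62), attaching an NVMe bdev named Nvme1 over TCP before running its suite. Issued against a long-running SPDK app instead, the same attach would look roughly like the following rpc.py call (short flag names as commonly used with scripts/rpc.py; digests left at their defaults, matching hdgst/ddgst=false above):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# On success this yields the Nvme1n1 bdev exercised by the "bdevio tests on: Nvme1n1" suite below.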
00:10:49.455 [2024-07-15 12:35:22.042420] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69198 ] 00:10:49.714 [2024-07-15 12:35:22.186507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.714 [2024-07-15 12:35:22.325828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.714 [2024-07-15 12:35:22.326039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.714 [2024-07-15 12:35:22.326192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.715 [2024-07-15 12:35:22.392441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:49.974 I/O targets: 00:10:49.974 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:49.974 00:10:49.974 00:10:49.974 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.974 http://cunit.sourceforge.net/ 00:10:49.974 00:10:49.974 00:10:49.974 Suite: bdevio tests on: Nvme1n1 00:10:49.974 Test: blockdev write read block ...passed 00:10:49.974 Test: blockdev write zeroes read block ...passed 00:10:49.974 Test: blockdev write zeroes read no split ...passed 00:10:49.974 Test: blockdev write zeroes read split ...passed 00:10:49.974 Test: blockdev write zeroes read split partial ...passed 00:10:49.974 Test: blockdev reset ...[2024-07-15 12:35:22.547985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:49.974 [2024-07-15 12:35:22.548136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f87c0 (9): Bad file descriptor 00:10:49.974 [2024-07-15 12:35:22.558964] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:49.974 passed 00:10:49.974 Test: blockdev write read 8 blocks ...passed 00:10:49.974 Test: blockdev write read size > 128k ...passed 00:10:49.974 Test: blockdev write read invalid size ...passed 00:10:49.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:49.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:49.974 Test: blockdev write read max offset ...passed 00:10:49.974 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:49.974 Test: blockdev writev readv 8 blocks ...passed 00:10:49.974 Test: blockdev writev readv 30 x 1block ...passed 00:10:49.974 Test: blockdev writev readv block ...passed 00:10:49.974 Test: blockdev writev readv size > 128k ...passed 00:10:49.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:49.974 Test: blockdev comparev and writev ...[2024-07-15 12:35:22.571873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.571994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.572026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.572044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.572682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.572722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.572769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.572786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.573418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.573465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.573491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.573507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.574160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.574200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.574226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.974 [2024-07-15 12:35:22.574242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:49.974 passed 00:10:49.974 Test: blockdev nvme passthru rw ...passed 00:10:49.974 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:35:22.576076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.974 [2024-07-15 12:35:22.576118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.576481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.974 [2024-07-15 12:35:22.576520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.576884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.974 [2024-07-15 12:35:22.576935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:49.974 [2024-07-15 12:35:22.577264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.975 [2024-07-15 12:35:22.577303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:49.975 passed 00:10:49.975 Test: blockdev nvme admin passthru ...passed 00:10:49.975 Test: blockdev copy ...passed 00:10:49.975 00:10:49.975 Run Summary: Type Total Ran Passed Failed Inactive 00:10:49.975 suites 1 1 n/a 0 0 00:10:49.975 tests 23 23 23 0 0 00:10:49.975 asserts 152 152 152 0 n/a 00:10:49.975 00:10:49.975 Elapsed time = 0.150 seconds 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.234 rmmod nvme_tcp 00:10:50.234 rmmod nvme_fabrics 00:10:50.234 rmmod nvme_keyring 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.234 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69162 ']' 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69162 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69162 ']' 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69162 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.235 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69162 00:10:50.494 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:50.494 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:50.494 killing process with pid 69162 00:10:50.494 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69162' 00:10:50.494 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69162 00:10:50.494 12:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69162 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:50.754 00:10:50.754 real 0m2.861s 00:10:50.754 user 0m9.401s 00:10:50.754 sys 0m0.816s 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.754 12:35:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.754 ************************************ 00:10:50.754 END TEST nvmf_bdevio 00:10:50.754 ************************************ 00:10:50.754 12:35:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:50.754 12:35:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:50.754 12:35:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.754 12:35:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.754 12:35:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.754 ************************************ 00:10:50.754 START TEST nvmf_auth_target 00:10:50.754 ************************************ 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:50.754 * Looking for test storage... 
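The teardown traced just above (nvmftestfini followed by nvmf_tcp_fini) repeats after every sub-test in this suite. Condensed, with the pid from this run, and with _remove_spdk_ns assumed to delete the test namespace (its body is not expanded in the trace):

sync
modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill 69162 && wait 69162           # killprocess: stop the nvmf_tgt started for this sub-test
ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if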
00:10:50.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:50.754 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:50.755 Cannot find device "nvmf_tgt_br" 00:10:50.755 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:50.755 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.755 Cannot find device "nvmf_tgt_br2" 00:10:50.755 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:50.755 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:50.755 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:51.013 Cannot find device "nvmf_tgt_br" 00:10:51.013 
12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:51.013 Cannot find device "nvmf_tgt_br2" 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:51.013 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:51.271 12:35:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:51.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:10:51.271 00:10:51.271 --- 10.0.0.2 ping statistics --- 00:10:51.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.271 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:51.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:51.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:51.271 00:10:51.271 --- 10.0.0.3 ping statistics --- 00:10:51.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.271 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:51.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:10:51.271 00:10:51.271 --- 10.0.0.1 ping statistics --- 00:10:51.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.271 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69379 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69379 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69379 ']' 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.271 12:35:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.271 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69411 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9ada44f3a6168ecbadb3b23256cdc4f577ff13c0e837d3ec 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.M7F 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9ada44f3a6168ecbadb3b23256cdc4f577ff13c0e837d3ec 0 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9ada44f3a6168ecbadb3b23256cdc4f577ff13c0e837d3ec 0 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9ada44f3a6168ecbadb3b23256cdc4f577ff13c0e837d3ec 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:52.218 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.M7F 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.M7F 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.M7F 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4fdfb531ec1beab903afd54b49e16801ef7750a07da7345ee273bb426aa71810 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3U3 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4fdfb531ec1beab903afd54b49e16801ef7750a07da7345ee273bb426aa71810 3 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4fdfb531ec1beab903afd54b49e16801ef7750a07da7345ee273bb426aa71810 3 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4fdfb531ec1beab903afd54b49e16801ef7750a07da7345ee273bb426aa71810 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3U3 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3U3 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.3U3 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:52.477 12:35:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b8e590f661d76a7d3ff7600404ed524f 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.WOR 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b8e590f661d76a7d3ff7600404ed524f 1 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b8e590f661d76a7d3ff7600404ed524f 1 
00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b8e590f661d76a7d3ff7600404ed524f 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.WOR 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.WOR 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.WOR 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:52.477 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=54cf3fc9d53bc007836bb982e527fc2edad95a2376521b0f 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.25V 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 54cf3fc9d53bc007836bb982e527fc2edad95a2376521b0f 2 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 54cf3fc9d53bc007836bb982e527fc2edad95a2376521b0f 2 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=54cf3fc9d53bc007836bb982e527fc2edad95a2376521b0f 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.25V 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.25V 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.25V 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:52.478 
12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c479f4ba6fffc9e480460cdbcda5e3d51bbf99e1bd19d131 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Gjz 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c479f4ba6fffc9e480460cdbcda5e3d51bbf99e1bd19d131 2 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c479f4ba6fffc9e480460cdbcda5e3d51bbf99e1bd19d131 2 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c479f4ba6fffc9e480460cdbcda5e3d51bbf99e1bd19d131 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:52.478 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:52.736 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Gjz 00:10:52.736 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Gjz 00:10:52.736 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Gjz 00:10:52.736 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:52.736 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=69ec32c2b2cd3fb1ddd3bfe6d1bab857 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Fhu 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 69ec32c2b2cd3fb1ddd3bfe6d1bab857 1 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 69ec32c2b2cd3fb1ddd3bfe6d1bab857 1 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=69ec32c2b2cd3fb1ddd3bfe6d1bab857 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Fhu 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Fhu 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Fhu 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b04ff53b09d3341e4026b4e49c3c4de0cfed4fd3f9d2e2755c85e06282e4cfb6 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aXi 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b04ff53b09d3341e4026b4e49c3c4de0cfed4fd3f9d2e2755c85e06282e4cfb6 3 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b04ff53b09d3341e4026b4e49c3c4de0cfed4fd3f9d2e2755c85e06282e4cfb6 3 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b04ff53b09d3341e4026b4e49c3c4de0cfed4fd3f9d2e2755c85e06282e4cfb6 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aXi 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aXi 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.aXi 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69379 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69379 ']' 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
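The key files generated above follow the NVMe DH-HMAC-CHAP secret representation: gen_dhchap_key reads the requested number of hex characters from /dev/urandom and format_dhchap_key wraps that ASCII hex string as DHHC-1:<hash id>:<base64>:. Below is a minimal sketch of that wrapping step, assuming (not confirmed by the trace) that the inline python appends a little-endian CRC32 of the key before base64-encoding, which is consistent with the --dhchap-secret values passed to nvme connect further down.

# sketch only: reproduce a DHHC-1 secret from a raw hex key (hash id 0 = null digest)
key=9ada44f3a6168ecbadb3b23256cdc4f577ff13c0e837d3ec   # keys[0] from the trace above (48 hex chars)
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed: 4-byte little-endian CRC32 suffix
print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(key + crc).decode()))
PY

If that CRC assumption holds, this prints the same DHHC-1:00:OWFkYTQ0...XtaczQ==: string that the nvme connect commands below pass as --dhchap-secret, which is how the /tmp/spdk.key-* files map onto the base64 secrets seen later in the log.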
00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.737 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.995 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.995 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:52.995 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69411 /var/tmp/host.sock 00:10:52.995 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69411 ']' 00:10:52.995 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:52.996 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:52.996 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:52.996 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.996 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.254 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.M7F 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.M7F 00:10:53.255 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.M7F 00:10:53.513 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.3U3 ]] 00:10:53.513 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3U3 00:10:53.513 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.513 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.513 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.513 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3U3 00:10:53.513 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.3U3 00:10:53.771 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:53.771 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.WOR 00:10:53.771 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.771 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.771 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.771 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.WOR 00:10:53.771 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.WOR 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.25V ]] 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.25V 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.25V 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.25V 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Gjz 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Gjz 00:10:54.338 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Gjz 00:10:54.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Fhu ]] 00:10:54.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fhu 00:10:54.596 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.596 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.596 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fhu 00:10:54.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fhu 00:10:54.854 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:54.854 
12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aXi 00:10:54.854 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.854 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.854 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.854 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.aXi 00:10:54.854 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.aXi 00:10:55.112 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:55.112 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:55.112 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:55.112 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.112 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:55.112 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:55.411 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.412 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.670 00:10:55.670 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.670 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.670 12:35:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.930 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.930 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.930 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.930 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.930 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.930 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.930 { 00:10:55.930 "cntlid": 1, 00:10:55.930 "qid": 0, 00:10:55.930 "state": "enabled", 00:10:55.930 "thread": "nvmf_tgt_poll_group_000", 00:10:55.930 "listen_address": { 00:10:55.930 "trtype": "TCP", 00:10:55.930 "adrfam": "IPv4", 00:10:55.930 "traddr": "10.0.0.2", 00:10:55.930 "trsvcid": "4420" 00:10:55.930 }, 00:10:55.930 "peer_address": { 00:10:55.930 "trtype": "TCP", 00:10:55.930 "adrfam": "IPv4", 00:10:55.930 "traddr": "10.0.0.1", 00:10:55.930 "trsvcid": "41586" 00:10:55.930 }, 00:10:55.930 "auth": { 00:10:55.930 "state": "completed", 00:10:55.930 "digest": "sha256", 00:10:55.930 "dhgroup": "null" 00:10:55.930 } 00:10:55.930 } 00:10:55.930 ]' 00:10:55.930 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.188 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.188 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.188 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:56.188 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.188 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.188 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.188 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.448 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.715 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.715 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.715 { 00:11:01.715 "cntlid": 3, 00:11:01.715 "qid": 0, 00:11:01.715 "state": "enabled", 00:11:01.715 "thread": "nvmf_tgt_poll_group_000", 00:11:01.715 "listen_address": { 00:11:01.715 "trtype": "TCP", 00:11:01.715 "adrfam": "IPv4", 00:11:01.715 "traddr": "10.0.0.2", 00:11:01.715 "trsvcid": "4420" 00:11:01.715 }, 00:11:01.715 "peer_address": { 00:11:01.715 "trtype": "TCP", 00:11:01.715 
"adrfam": "IPv4", 00:11:01.715 "traddr": "10.0.0.1", 00:11:01.715 "trsvcid": "56650" 00:11:01.715 }, 00:11:01.715 "auth": { 00:11:01.715 "state": "completed", 00:11:01.715 "digest": "sha256", 00:11:01.715 "dhgroup": "null" 00:11:01.715 } 00:11:01.715 } 00:11:01.715 ]' 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:01.715 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.975 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.975 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.975 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.237 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:02.803 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.061 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.629 00:11:03.629 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.629 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.629 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.888 { 00:11:03.888 "cntlid": 5, 00:11:03.888 "qid": 0, 00:11:03.888 "state": "enabled", 00:11:03.888 "thread": "nvmf_tgt_poll_group_000", 00:11:03.888 "listen_address": { 00:11:03.888 "trtype": "TCP", 00:11:03.888 "adrfam": "IPv4", 00:11:03.888 "traddr": "10.0.0.2", 00:11:03.888 "trsvcid": "4420" 00:11:03.888 }, 00:11:03.888 "peer_address": { 00:11:03.888 "trtype": "TCP", 00:11:03.888 "adrfam": "IPv4", 00:11:03.888 "traddr": "10.0.0.1", 00:11:03.888 "trsvcid": "56680" 00:11:03.888 }, 00:11:03.888 "auth": { 00:11:03.888 "state": "completed", 00:11:03.888 "digest": "sha256", 00:11:03.888 "dhgroup": "null" 00:11:03.888 } 00:11:03.888 } 00:11:03.888 ]' 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.888 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.147 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:04.713 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.288 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.288 00:11:05.547 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.547 12:35:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.547 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.805 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.805 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.805 12:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.805 12:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.805 12:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.805 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.805 { 00:11:05.805 "cntlid": 7, 00:11:05.805 "qid": 0, 00:11:05.805 "state": "enabled", 00:11:05.805 "thread": "nvmf_tgt_poll_group_000", 00:11:05.805 "listen_address": { 00:11:05.805 "trtype": "TCP", 00:11:05.805 "adrfam": "IPv4", 00:11:05.805 "traddr": "10.0.0.2", 00:11:05.805 "trsvcid": "4420" 00:11:05.805 }, 00:11:05.806 "peer_address": { 00:11:05.806 "trtype": "TCP", 00:11:05.806 "adrfam": "IPv4", 00:11:05.806 "traddr": "10.0.0.1", 00:11:05.806 "trsvcid": "56702" 00:11:05.806 }, 00:11:05.806 "auth": { 00:11:05.806 "state": "completed", 00:11:05.806 "digest": "sha256", 00:11:05.806 "dhgroup": "null" 00:11:05.806 } 00:11:05.806 } 00:11:05.806 ]' 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.806 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.064 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:06.997 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.254 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.511 00:11:07.511 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.511 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.511 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.768 { 00:11:07.768 "cntlid": 9, 00:11:07.768 "qid": 0, 00:11:07.768 "state": "enabled", 00:11:07.768 "thread": "nvmf_tgt_poll_group_000", 00:11:07.768 "listen_address": { 00:11:07.768 "trtype": "TCP", 00:11:07.768 "adrfam": "IPv4", 00:11:07.768 
"traddr": "10.0.0.2", 00:11:07.768 "trsvcid": "4420" 00:11:07.768 }, 00:11:07.768 "peer_address": { 00:11:07.768 "trtype": "TCP", 00:11:07.768 "adrfam": "IPv4", 00:11:07.768 "traddr": "10.0.0.1", 00:11:07.768 "trsvcid": "56720" 00:11:07.768 }, 00:11:07.768 "auth": { 00:11:07.768 "state": "completed", 00:11:07.768 "digest": "sha256", 00:11:07.768 "dhgroup": "ffdhe2048" 00:11:07.768 } 00:11:07.768 } 00:11:07.768 ]' 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.768 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.026 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.026 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.026 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.284 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:08.851 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.109 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.366 00:11:09.366 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.366 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.366 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.627 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.627 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.627 12:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.627 12:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.627 12:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.627 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.627 { 00:11:09.627 "cntlid": 11, 00:11:09.627 "qid": 0, 00:11:09.627 "state": "enabled", 00:11:09.627 "thread": "nvmf_tgt_poll_group_000", 00:11:09.627 "listen_address": { 00:11:09.627 "trtype": "TCP", 00:11:09.627 "adrfam": "IPv4", 00:11:09.627 "traddr": "10.0.0.2", 00:11:09.627 "trsvcid": "4420" 00:11:09.627 }, 00:11:09.627 "peer_address": { 00:11:09.627 "trtype": "TCP", 00:11:09.627 "adrfam": "IPv4", 00:11:09.627 "traddr": "10.0.0.1", 00:11:09.627 "trsvcid": "42398" 00:11:09.627 }, 00:11:09.627 "auth": { 00:11:09.627 "state": "completed", 00:11:09.627 "digest": "sha256", 00:11:09.627 "dhgroup": "ffdhe2048" 00:11:09.627 } 00:11:09.627 } 00:11:09.627 ]' 00:11:09.627 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.885 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.885 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.885 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:09.885 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.885 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.885 12:35:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.885 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.143 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:10.710 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.970 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.229 00:11:11.229 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.229 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.229 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.798 { 00:11:11.798 "cntlid": 13, 00:11:11.798 "qid": 0, 00:11:11.798 "state": "enabled", 00:11:11.798 "thread": "nvmf_tgt_poll_group_000", 00:11:11.798 "listen_address": { 00:11:11.798 "trtype": "TCP", 00:11:11.798 "adrfam": "IPv4", 00:11:11.798 "traddr": "10.0.0.2", 00:11:11.798 "trsvcid": "4420" 00:11:11.798 }, 00:11:11.798 "peer_address": { 00:11:11.798 "trtype": "TCP", 00:11:11.798 "adrfam": "IPv4", 00:11:11.798 "traddr": "10.0.0.1", 00:11:11.798 "trsvcid": "42426" 00:11:11.798 }, 00:11:11.798 "auth": { 00:11:11.798 "state": "completed", 00:11:11.798 "digest": "sha256", 00:11:11.798 "dhgroup": "ffdhe2048" 00:11:11.798 } 00:11:11.798 } 00:11:11.798 ]' 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.798 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.056 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 
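The round that ends here is one pass of connect_authenticate sha256 ffdhe2048 with key2/ckey2: the target registers the host NQN with a bidirectional DH-HMAC-CHAP key pair, the host-side SPDK app (driven over /var/tmp/host.sock) attaches a controller with the matching keys, the resulting qpair is checked, and the pairing is torn down before the next key. A minimal sketch of that flow, assuming the target and host apps are already running at the addresses used in this run and that keys named key2/ckey2 were registered earlier in the test (the key setup is outside this excerpt):

# Hypothetical condensed replay of one connect_authenticate round.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c

# Target side (default RPC socket): allow the host to authenticate with key2
# and require ckey2 back from the controller (bidirectional auth).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: pin the initiator to the digest/dhgroup under test, then attach.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Tear down so the next digest/dhgroup/key combination starts clean.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"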
00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:12.624 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:12.895 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:13.153 00:11:13.153 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.153 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.153 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.410 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.410 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.410 12:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.410 12:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.410 12:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.410 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.410 { 00:11:13.410 "cntlid": 15, 00:11:13.410 "qid": 0, 
00:11:13.410 "state": "enabled", 00:11:13.410 "thread": "nvmf_tgt_poll_group_000", 00:11:13.410 "listen_address": { 00:11:13.410 "trtype": "TCP", 00:11:13.410 "adrfam": "IPv4", 00:11:13.410 "traddr": "10.0.0.2", 00:11:13.410 "trsvcid": "4420" 00:11:13.410 }, 00:11:13.410 "peer_address": { 00:11:13.410 "trtype": "TCP", 00:11:13.410 "adrfam": "IPv4", 00:11:13.410 "traddr": "10.0.0.1", 00:11:13.410 "trsvcid": "42442" 00:11:13.410 }, 00:11:13.410 "auth": { 00:11:13.410 "state": "completed", 00:11:13.410 "digest": "sha256", 00:11:13.410 "dhgroup": "ffdhe2048" 00:11:13.410 } 00:11:13.410 } 00:11:13.410 ]' 00:11:13.410 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.667 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.667 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.667 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.667 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.667 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.667 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.667 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.925 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.861 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.119 00:11:15.119 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.119 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.119 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.377 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.636 { 00:11:15.636 "cntlid": 17, 00:11:15.636 "qid": 0, 00:11:15.636 "state": "enabled", 00:11:15.636 "thread": "nvmf_tgt_poll_group_000", 00:11:15.636 "listen_address": { 00:11:15.636 "trtype": "TCP", 00:11:15.636 "adrfam": "IPv4", 00:11:15.636 "traddr": "10.0.0.2", 00:11:15.636 "trsvcid": "4420" 00:11:15.636 }, 00:11:15.636 "peer_address": { 00:11:15.636 "trtype": "TCP", 00:11:15.636 "adrfam": "IPv4", 00:11:15.636 "traddr": "10.0.0.1", 00:11:15.636 "trsvcid": "42460" 00:11:15.636 }, 00:11:15.636 "auth": { 00:11:15.636 "state": "completed", 00:11:15.636 "digest": "sha256", 00:11:15.636 "dhgroup": "ffdhe3072" 00:11:15.636 } 00:11:15.636 } 00:11:15.636 ]' 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.636 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.202 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:16.769 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.026 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.026 
12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.284 00:11:17.284 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.284 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.284 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.542 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.542 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.542 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.542 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.542 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.542 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.542 { 00:11:17.542 "cntlid": 19, 00:11:17.542 "qid": 0, 00:11:17.542 "state": "enabled", 00:11:17.542 "thread": "nvmf_tgt_poll_group_000", 00:11:17.542 "listen_address": { 00:11:17.542 "trtype": "TCP", 00:11:17.542 "adrfam": "IPv4", 00:11:17.542 "traddr": "10.0.0.2", 00:11:17.542 "trsvcid": "4420" 00:11:17.542 }, 00:11:17.542 "peer_address": { 00:11:17.542 "trtype": "TCP", 00:11:17.542 "adrfam": "IPv4", 00:11:17.542 "traddr": "10.0.0.1", 00:11:17.542 "trsvcid": "42482" 00:11:17.542 }, 00:11:17.542 "auth": { 00:11:17.542 "state": "completed", 00:11:17.542 "digest": "sha256", 00:11:17.542 "dhgroup": "ffdhe3072" 00:11:17.542 } 00:11:17.542 } 00:11:17.542 ]' 00:11:17.542 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.800 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.800 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.800 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:17.800 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.800 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.800 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.800 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.058 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
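Between attach and detach, each round asserts that authentication actually completed with the parameters under test rather than silently falling back: bdev_nvme_get_controllers on the host side must report the nvme0 controller, and nvmf_subsystem_get_qpairs on the target must show an auth block whose digest, dhgroup and state match. A condensed sketch of those checks for the sha256/ffdhe3072 pass shown above, assuming the same RPC sockets and subsystem NQN:

# Hypothetical standalone version of the jq assertions in the log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the attached controller should be visible under its bdev name.
name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1

# Target side: the admin qpair's auth block should reflect the negotiated parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]] || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1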
00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:18.625 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.883 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.451 00:11:19.451 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.451 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.451 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.451 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.451 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.451 12:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.451 12:35:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.451 12:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.451 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.451 { 00:11:19.451 "cntlid": 21, 00:11:19.451 "qid": 0, 00:11:19.451 "state": "enabled", 00:11:19.451 "thread": "nvmf_tgt_poll_group_000", 00:11:19.451 "listen_address": { 00:11:19.451 "trtype": "TCP", 00:11:19.451 "adrfam": "IPv4", 00:11:19.451 "traddr": "10.0.0.2", 00:11:19.451 "trsvcid": "4420" 00:11:19.451 }, 00:11:19.451 "peer_address": { 00:11:19.451 "trtype": "TCP", 00:11:19.451 "adrfam": "IPv4", 00:11:19.451 "traddr": "10.0.0.1", 00:11:19.451 "trsvcid": "42516" 00:11:19.451 }, 00:11:19.451 "auth": { 00:11:19.451 "state": "completed", 00:11:19.451 "digest": "sha256", 00:11:19.451 "dhgroup": "ffdhe3072" 00:11:19.451 } 00:11:19.451 } 00:11:19.451 ]' 00:11:19.451 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.710 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.710 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.710 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:19.710 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.710 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.710 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.710 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.969 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:20.540 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:20.804 12:35:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:20.804 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:21.370 00:11:21.370 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.370 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.370 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.629 { 00:11:21.629 "cntlid": 23, 00:11:21.629 "qid": 0, 00:11:21.629 "state": "enabled", 00:11:21.629 "thread": "nvmf_tgt_poll_group_000", 00:11:21.629 "listen_address": { 00:11:21.629 "trtype": "TCP", 00:11:21.629 "adrfam": "IPv4", 00:11:21.629 "traddr": "10.0.0.2", 00:11:21.629 "trsvcid": "4420" 00:11:21.629 }, 00:11:21.629 "peer_address": { 00:11:21.629 "trtype": "TCP", 00:11:21.629 "adrfam": "IPv4", 00:11:21.629 "traddr": "10.0.0.1", 00:11:21.629 "trsvcid": "60346" 00:11:21.629 }, 00:11:21.629 "auth": { 00:11:21.629 "state": "completed", 00:11:21.629 "digest": "sha256", 00:11:21.629 "dhgroup": "ffdhe3072" 00:11:21.629 } 00:11:21.629 } 00:11:21.629 ]' 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.629 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.196 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:22.763 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.022 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.281 00:11:23.281 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.281 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.281 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.540 { 00:11:23.540 "cntlid": 25, 00:11:23.540 "qid": 0, 00:11:23.540 "state": "enabled", 00:11:23.540 "thread": "nvmf_tgt_poll_group_000", 00:11:23.540 "listen_address": { 00:11:23.540 "trtype": "TCP", 00:11:23.540 "adrfam": "IPv4", 00:11:23.540 "traddr": "10.0.0.2", 00:11:23.540 "trsvcid": "4420" 00:11:23.540 }, 00:11:23.540 "peer_address": { 00:11:23.540 "trtype": "TCP", 00:11:23.540 "adrfam": "IPv4", 00:11:23.540 "traddr": "10.0.0.1", 00:11:23.540 "trsvcid": "60372" 00:11:23.540 }, 00:11:23.540 "auth": { 00:11:23.540 "state": "completed", 00:11:23.540 "digest": "sha256", 00:11:23.540 "dhgroup": "ffdhe4096" 00:11:23.540 } 00:11:23.540 } 00:11:23.540 ]' 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:23.540 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.799 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.799 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.799 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.059 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret 
DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:24.626 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.885 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.144 00:11:25.144 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.144 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.144 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.403 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
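Once the SPDK host-side controller is detached, the same key pair is exercised through the kernel initiator: nvme-cli is handed the secrets in DHHC-1 wire format on the command line and must complete in-band authentication during connect, and the "disconnected 1 controller(s)" lines come from the matching teardown. A sketch of that leg, with $dhchap_key and $dhchap_ckey standing in for the DHHC-1:xx:...: strings printed verbatim in the log above:

# Hypothetical kernel-initiator leg of one round; secrets are placeholders.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c
hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c

# Connect with one I/O queue; authentication happens in-band before the
# controller becomes usable.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ckey"

# Matching teardown; this is what produces the "disconnected 1 controller(s)" output.
nvme disconnect -n "$subnqn"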
00:11:25.403 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.403 12:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.403 12:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.403 12:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.403 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.403 { 00:11:25.403 "cntlid": 27, 00:11:25.403 "qid": 0, 00:11:25.403 "state": "enabled", 00:11:25.403 "thread": "nvmf_tgt_poll_group_000", 00:11:25.403 "listen_address": { 00:11:25.403 "trtype": "TCP", 00:11:25.403 "adrfam": "IPv4", 00:11:25.403 "traddr": "10.0.0.2", 00:11:25.403 "trsvcid": "4420" 00:11:25.403 }, 00:11:25.403 "peer_address": { 00:11:25.403 "trtype": "TCP", 00:11:25.403 "adrfam": "IPv4", 00:11:25.403 "traddr": "10.0.0.1", 00:11:25.403 "trsvcid": "60412" 00:11:25.403 }, 00:11:25.403 "auth": { 00:11:25.403 "state": "completed", 00:11:25.403 "digest": "sha256", 00:11:25.403 "dhgroup": "ffdhe4096" 00:11:25.403 } 00:11:25.403 } 00:11:25.403 ]' 00:11:25.403 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.663 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.663 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.663 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:25.663 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.663 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.663 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.663 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.921 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:26.490 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.490 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:26.490 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.490 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.490 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.491 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.491 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:26.491 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.750 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.010 00:11:27.268 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.269 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.269 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.269 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.269 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.269 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.269 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.527 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.527 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.527 { 00:11:27.527 "cntlid": 29, 00:11:27.527 "qid": 0, 00:11:27.527 "state": "enabled", 00:11:27.527 "thread": "nvmf_tgt_poll_group_000", 00:11:27.527 "listen_address": { 00:11:27.527 "trtype": "TCP", 00:11:27.527 "adrfam": "IPv4", 00:11:27.527 "traddr": "10.0.0.2", 00:11:27.527 "trsvcid": "4420" 00:11:27.527 }, 00:11:27.527 "peer_address": { 00:11:27.527 "trtype": "TCP", 00:11:27.527 "adrfam": "IPv4", 00:11:27.527 "traddr": "10.0.0.1", 00:11:27.527 "trsvcid": "60452" 00:11:27.527 }, 00:11:27.527 "auth": { 00:11:27.527 "state": "completed", 00:11:27.527 "digest": "sha256", 00:11:27.527 "dhgroup": 
"ffdhe4096" 00:11:27.527 } 00:11:27.527 } 00:11:27.527 ]' 00:11:27.527 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.527 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.527 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.527 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:27.527 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.527 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.527 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.527 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.786 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:11:28.353 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.353 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:28.353 12:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.353 12:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.353 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.353 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.353 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:28.353 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.919 12:36:01 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:28.920 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.920 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:28.920 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:29.177 00:11:29.177 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.177 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.177 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.435 { 00:11:29.435 "cntlid": 31, 00:11:29.435 "qid": 0, 00:11:29.435 "state": "enabled", 00:11:29.435 "thread": "nvmf_tgt_poll_group_000", 00:11:29.435 "listen_address": { 00:11:29.435 "trtype": "TCP", 00:11:29.435 "adrfam": "IPv4", 00:11:29.435 "traddr": "10.0.0.2", 00:11:29.435 "trsvcid": "4420" 00:11:29.435 }, 00:11:29.435 "peer_address": { 00:11:29.435 "trtype": "TCP", 00:11:29.435 "adrfam": "IPv4", 00:11:29.435 "traddr": "10.0.0.1", 00:11:29.435 "trsvcid": "60482" 00:11:29.435 }, 00:11:29.435 "auth": { 00:11:29.435 "state": "completed", 00:11:29.435 "digest": "sha256", 00:11:29.435 "dhgroup": "ffdhe4096" 00:11:29.435 } 00:11:29.435 } 00:11:29.435 ]' 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.435 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.435 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.435 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.435 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.435 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.435 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.774 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 
88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:11:30.340 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.340 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:30.340 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.340 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.340 12:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.340 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.340 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.340 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:30.340 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.906 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.163 00:11:31.163 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.163 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.163 12:36:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.421 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.421 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.421 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.421 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.421 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.421 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.421 { 00:11:31.421 "cntlid": 33, 00:11:31.421 "qid": 0, 00:11:31.422 "state": "enabled", 00:11:31.422 "thread": "nvmf_tgt_poll_group_000", 00:11:31.422 "listen_address": { 00:11:31.422 "trtype": "TCP", 00:11:31.422 "adrfam": "IPv4", 00:11:31.422 "traddr": "10.0.0.2", 00:11:31.422 "trsvcid": "4420" 00:11:31.422 }, 00:11:31.422 "peer_address": { 00:11:31.422 "trtype": "TCP", 00:11:31.422 "adrfam": "IPv4", 00:11:31.422 "traddr": "10.0.0.1", 00:11:31.422 "trsvcid": "51636" 00:11:31.422 }, 00:11:31.422 "auth": { 00:11:31.422 "state": "completed", 00:11:31.422 "digest": "sha256", 00:11:31.422 "dhgroup": "ffdhe6144" 00:11:31.422 } 00:11:31.422 } 00:11:31.422 ]' 00:11:31.422 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.680 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.680 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.680 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:31.680 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.680 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.680 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.680 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.939 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:32.506 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.506 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:32.506 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.506 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.506 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.506 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.506 
12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:32.506 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.074 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.333 00:11:33.333 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.333 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.333 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.592 { 00:11:33.592 "cntlid": 35, 00:11:33.592 "qid": 0, 00:11:33.592 "state": "enabled", 00:11:33.592 "thread": "nvmf_tgt_poll_group_000", 00:11:33.592 "listen_address": { 00:11:33.592 "trtype": "TCP", 00:11:33.592 "adrfam": "IPv4", 00:11:33.592 "traddr": "10.0.0.2", 00:11:33.592 "trsvcid": "4420" 00:11:33.592 }, 00:11:33.592 "peer_address": { 00:11:33.592 "trtype": "TCP", 00:11:33.592 
"adrfam": "IPv4", 00:11:33.592 "traddr": "10.0.0.1", 00:11:33.592 "trsvcid": "51654" 00:11:33.592 }, 00:11:33.592 "auth": { 00:11:33.592 "state": "completed", 00:11:33.592 "digest": "sha256", 00:11:33.592 "dhgroup": "ffdhe6144" 00:11:33.592 } 00:11:33.592 } 00:11:33.592 ]' 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.592 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.856 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:33.856 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.856 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.856 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.856 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.115 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:34.683 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.683 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:34.683 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.683 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.683 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.683 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.684 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:34.684 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.943 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.509 00:11:35.509 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.509 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.509 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.768 { 00:11:35.768 "cntlid": 37, 00:11:35.768 "qid": 0, 00:11:35.768 "state": "enabled", 00:11:35.768 "thread": "nvmf_tgt_poll_group_000", 00:11:35.768 "listen_address": { 00:11:35.768 "trtype": "TCP", 00:11:35.768 "adrfam": "IPv4", 00:11:35.768 "traddr": "10.0.0.2", 00:11:35.768 "trsvcid": "4420" 00:11:35.768 }, 00:11:35.768 "peer_address": { 00:11:35.768 "trtype": "TCP", 00:11:35.768 "adrfam": "IPv4", 00:11:35.768 "traddr": "10.0.0.1", 00:11:35.768 "trsvcid": "51690" 00:11:35.768 }, 00:11:35.768 "auth": { 00:11:35.768 "state": "completed", 00:11:35.768 "digest": "sha256", 00:11:35.768 "dhgroup": "ffdhe6144" 00:11:35.768 } 00:11:35.768 } 00:11:35.768 ]' 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.768 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.027 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:36.594 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.160 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.161 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:37.161 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:37.419 00:11:37.419 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
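The trace repeats one cycle per digest/dhgroup/key combination. Below is a minimal bash sketch of that cycle, reconstructed only from the commands visible in this log; the loop bounds, the rpc/uuid/subnqn variable names, and the use of the default RPC socket for the target-side calls are assumptions (the script itself goes through hostrpc/rpc_cmd helpers whose definitions are not shown here), and the DHHC-1 secret is a placeholder rather than one of the generated keys.

    # Hedged sketch of the per-key authentication cycle seen in this trace (not the literal auth.sh).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid

    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do      # dhgroups exercised in this part of the run
      for keyid in 0 1 2 3; do                            # keys/ctrlr-keys loaded earlier in the run
        # Host side: restrict the initiator to the digest/dhgroup under test.
        $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # Target side: allow the host with the chosen key (ctrlr-key omitted for key3 in the log).
        $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Authenticate over the SPDK initiator, then check the controller and qpair state.
        $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
        $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
        $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        # Repeat the handshake with the kernel initiator, then clean up for the next key.
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$uuid" \
            --dhchap-secret "DHHC-1:..."                  # placeholder; the run uses its generated secrets
        nvme disconnect -n "$subnqn"
        $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
      done
    done
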
00:11:37.419 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.419 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.688 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.688 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.688 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.688 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.688 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.688 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.688 { 00:11:37.688 "cntlid": 39, 00:11:37.688 "qid": 0, 00:11:37.688 "state": "enabled", 00:11:37.688 "thread": "nvmf_tgt_poll_group_000", 00:11:37.688 "listen_address": { 00:11:37.688 "trtype": "TCP", 00:11:37.688 "adrfam": "IPv4", 00:11:37.688 "traddr": "10.0.0.2", 00:11:37.688 "trsvcid": "4420" 00:11:37.688 }, 00:11:37.688 "peer_address": { 00:11:37.688 "trtype": "TCP", 00:11:37.688 "adrfam": "IPv4", 00:11:37.688 "traddr": "10.0.0.1", 00:11:37.688 "trsvcid": "51718" 00:11:37.688 }, 00:11:37.688 "auth": { 00:11:37.688 "state": "completed", 00:11:37.688 "digest": "sha256", 00:11:37.688 "dhgroup": "ffdhe6144" 00:11:37.688 } 00:11:37.688 } 00:11:37.688 ]' 00:11:37.688 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.961 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.961 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.961 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:37.961 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.961 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.961 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.961 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.219 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.785 12:36:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:38.785 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:39.043 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:39.043 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.043 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:39.043 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:39.043 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:39.043 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.044 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.044 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.044 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.044 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.044 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.044 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.609 00:11:39.609 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.609 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.609 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.867 { 00:11:39.867 "cntlid": 41, 00:11:39.867 "qid": 0, 00:11:39.867 "state": "enabled", 00:11:39.867 "thread": "nvmf_tgt_poll_group_000", 00:11:39.867 "listen_address": { 00:11:39.867 "trtype": 
"TCP", 00:11:39.867 "adrfam": "IPv4", 00:11:39.867 "traddr": "10.0.0.2", 00:11:39.867 "trsvcid": "4420" 00:11:39.867 }, 00:11:39.867 "peer_address": { 00:11:39.867 "trtype": "TCP", 00:11:39.867 "adrfam": "IPv4", 00:11:39.867 "traddr": "10.0.0.1", 00:11:39.867 "trsvcid": "51754" 00:11:39.867 }, 00:11:39.867 "auth": { 00:11:39.867 "state": "completed", 00:11:39.867 "digest": "sha256", 00:11:39.867 "dhgroup": "ffdhe8192" 00:11:39.867 } 00:11:39.867 } 00:11:39.867 ]' 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.867 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.125 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:41.058 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:41.317 12:36:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.317 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.884 00:11:41.884 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.884 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.884 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.142 { 00:11:42.142 "cntlid": 43, 00:11:42.142 "qid": 0, 00:11:42.142 "state": "enabled", 00:11:42.142 "thread": "nvmf_tgt_poll_group_000", 00:11:42.142 "listen_address": { 00:11:42.142 "trtype": "TCP", 00:11:42.142 "adrfam": "IPv4", 00:11:42.142 "traddr": "10.0.0.2", 00:11:42.142 "trsvcid": "4420" 00:11:42.142 }, 00:11:42.142 "peer_address": { 00:11:42.142 "trtype": "TCP", 00:11:42.142 "adrfam": "IPv4", 00:11:42.142 "traddr": "10.0.0.1", 00:11:42.142 "trsvcid": "57034" 00:11:42.142 }, 00:11:42.142 "auth": { 00:11:42.142 "state": "completed", 00:11:42.142 "digest": "sha256", 00:11:42.142 "dhgroup": "ffdhe8192" 00:11:42.142 } 00:11:42.142 } 00:11:42.142 ]' 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.142 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.400 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.966 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.224 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.790 00:11:43.790 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.790 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.790 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.049 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.049 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.049 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.049 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.049 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.049 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.049 { 00:11:44.049 "cntlid": 45, 00:11:44.049 "qid": 0, 00:11:44.049 "state": "enabled", 00:11:44.049 "thread": "nvmf_tgt_poll_group_000", 00:11:44.049 "listen_address": { 00:11:44.049 "trtype": "TCP", 00:11:44.049 "adrfam": "IPv4", 00:11:44.049 "traddr": "10.0.0.2", 00:11:44.049 "trsvcid": "4420" 00:11:44.049 }, 00:11:44.049 "peer_address": { 00:11:44.049 "trtype": "TCP", 00:11:44.049 "adrfam": "IPv4", 00:11:44.049 "traddr": "10.0.0.1", 00:11:44.049 "trsvcid": "57056" 00:11:44.049 }, 00:11:44.049 "auth": { 00:11:44.049 "state": "completed", 00:11:44.049 "digest": "sha256", 00:11:44.049 "dhgroup": "ffdhe8192" 00:11:44.049 } 00:11:44.049 } 00:11:44.049 ]' 00:11:44.049 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.335 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.335 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.335 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:44.335 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.335 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.335 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.335 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.594 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:45.161 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:45.452 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.018 00:11:46.276 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.276 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.276 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.534 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.534 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.534 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.534 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.534 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.534 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:46.534 { 00:11:46.534 "cntlid": 47, 00:11:46.534 "qid": 0, 00:11:46.534 "state": "enabled", 00:11:46.534 "thread": "nvmf_tgt_poll_group_000", 00:11:46.534 "listen_address": { 00:11:46.534 "trtype": "TCP", 00:11:46.534 "adrfam": "IPv4", 00:11:46.534 "traddr": "10.0.0.2", 00:11:46.534 "trsvcid": "4420" 00:11:46.534 }, 00:11:46.534 "peer_address": { 00:11:46.534 "trtype": "TCP", 00:11:46.534 "adrfam": "IPv4", 00:11:46.534 "traddr": "10.0.0.1", 00:11:46.534 "trsvcid": "57068" 00:11:46.534 }, 00:11:46.534 "auth": { 00:11:46.534 "state": "completed", 00:11:46.534 "digest": "sha256", 00:11:46.534 "dhgroup": "ffdhe8192" 00:11:46.534 } 00:11:46.534 } 00:11:46.534 ]' 00:11:46.534 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.534 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.534 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.534 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:46.534 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:46.534 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.534 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.534 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.792 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.727 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.985 00:11:47.985 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.985 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.985 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.243 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.243 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.243 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.243 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.243 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.243 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.243 { 00:11:48.243 "cntlid": 49, 00:11:48.243 "qid": 0, 00:11:48.243 "state": "enabled", 00:11:48.243 "thread": "nvmf_tgt_poll_group_000", 00:11:48.243 "listen_address": { 00:11:48.243 "trtype": "TCP", 00:11:48.243 "adrfam": "IPv4", 00:11:48.243 "traddr": "10.0.0.2", 00:11:48.243 "trsvcid": "4420" 00:11:48.243 }, 00:11:48.243 "peer_address": { 00:11:48.243 "trtype": "TCP", 00:11:48.243 "adrfam": "IPv4", 00:11:48.243 "traddr": "10.0.0.1", 00:11:48.243 "trsvcid": "57104" 00:11:48.243 }, 00:11:48.243 "auth": { 00:11:48.243 "state": "completed", 00:11:48.243 "digest": "sha384", 00:11:48.243 "dhgroup": "null" 00:11:48.243 } 00:11:48.243 } 00:11:48.243 ]' 00:11:48.243 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.501 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.501 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.501 12:36:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:48.501 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.501 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.501 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.501 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.758 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:49.324 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.582 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.840 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.840 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.840 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.098 00:11:50.099 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.099 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.099 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.357 { 00:11:50.357 "cntlid": 51, 00:11:50.357 "qid": 0, 00:11:50.357 "state": "enabled", 00:11:50.357 "thread": "nvmf_tgt_poll_group_000", 00:11:50.357 "listen_address": { 00:11:50.357 "trtype": "TCP", 00:11:50.357 "adrfam": "IPv4", 00:11:50.357 "traddr": "10.0.0.2", 00:11:50.357 "trsvcid": "4420" 00:11:50.357 }, 00:11:50.357 "peer_address": { 00:11:50.357 "trtype": "TCP", 00:11:50.357 "adrfam": "IPv4", 00:11:50.357 "traddr": "10.0.0.1", 00:11:50.357 "trsvcid": "38114" 00:11:50.357 }, 00:11:50.357 "auth": { 00:11:50.357 "state": "completed", 00:11:50.357 "digest": "sha384", 00:11:50.357 "dhgroup": "null" 00:11:50.357 } 00:11:50.357 } 00:11:50.357 ]' 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:50.357 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.357 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.357 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.357 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.923 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:51.488 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.746 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.004 00:11:52.004 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.004 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.004 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.261 { 00:11:52.261 "cntlid": 53, 00:11:52.261 "qid": 0, 00:11:52.261 "state": "enabled", 00:11:52.261 "thread": "nvmf_tgt_poll_group_000", 00:11:52.261 "listen_address": { 00:11:52.261 "trtype": "TCP", 00:11:52.261 "adrfam": "IPv4", 00:11:52.261 "traddr": "10.0.0.2", 00:11:52.261 "trsvcid": "4420" 00:11:52.261 }, 00:11:52.261 "peer_address": { 00:11:52.261 "trtype": "TCP", 00:11:52.261 "adrfam": "IPv4", 00:11:52.261 "traddr": "10.0.0.1", 00:11:52.261 "trsvcid": "38132" 00:11:52.261 }, 00:11:52.261 "auth": { 00:11:52.261 "state": "completed", 00:11:52.261 "digest": "sha384", 00:11:52.261 "dhgroup": "null" 00:11:52.261 } 00:11:52.261 } 00:11:52.261 ]' 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:52.261 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.519 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.519 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.519 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.777 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:53.343 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.602 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.859 00:11:53.860 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.860 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.860 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.437 { 00:11:54.437 "cntlid": 55, 00:11:54.437 "qid": 0, 00:11:54.437 "state": "enabled", 00:11:54.437 "thread": "nvmf_tgt_poll_group_000", 00:11:54.437 "listen_address": { 00:11:54.437 "trtype": "TCP", 00:11:54.437 "adrfam": "IPv4", 00:11:54.437 "traddr": "10.0.0.2", 00:11:54.437 "trsvcid": "4420" 00:11:54.437 }, 00:11:54.437 "peer_address": { 00:11:54.437 "trtype": "TCP", 00:11:54.437 "adrfam": "IPv4", 00:11:54.437 "traddr": "10.0.0.1", 00:11:54.437 "trsvcid": "38154" 00:11:54.437 }, 00:11:54.437 "auth": { 00:11:54.437 "state": "completed", 00:11:54.437 "digest": "sha384", 00:11:54.437 "dhgroup": "null" 00:11:54.437 } 00:11:54.437 } 00:11:54.437 ]' 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.437 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.437 12:36:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.438 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:54.438 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.438 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.438 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.438 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.696 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.331 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.589 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.590 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.590 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.590 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.590 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.157 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.157 { 00:11:56.157 "cntlid": 57, 00:11:56.157 "qid": 0, 00:11:56.157 "state": "enabled", 00:11:56.157 "thread": "nvmf_tgt_poll_group_000", 00:11:56.157 "listen_address": { 00:11:56.157 "trtype": "TCP", 00:11:56.157 "adrfam": "IPv4", 00:11:56.157 "traddr": "10.0.0.2", 00:11:56.157 "trsvcid": "4420" 00:11:56.157 }, 00:11:56.157 "peer_address": { 00:11:56.157 "trtype": "TCP", 00:11:56.157 "adrfam": "IPv4", 00:11:56.157 "traddr": "10.0.0.1", 00:11:56.157 "trsvcid": "38186" 00:11:56.157 }, 00:11:56.157 "auth": { 00:11:56.157 "state": "completed", 00:11:56.157 "digest": "sha384", 00:11:56.157 "dhgroup": "ffdhe2048" 00:11:56.157 } 00:11:56.157 } 00:11:56.157 ]' 00:11:56.157 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.416 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.416 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.416 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:56.416 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.416 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.416 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.416 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.674 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret 
DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:57.239 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.498 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.757 00:11:57.757 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.757 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.757 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.016 { 00:11:58.016 "cntlid": 59, 00:11:58.016 "qid": 0, 00:11:58.016 "state": "enabled", 00:11:58.016 "thread": "nvmf_tgt_poll_group_000", 00:11:58.016 "listen_address": { 00:11:58.016 "trtype": "TCP", 00:11:58.016 "adrfam": "IPv4", 00:11:58.016 "traddr": "10.0.0.2", 00:11:58.016 "trsvcid": "4420" 00:11:58.016 }, 00:11:58.016 "peer_address": { 00:11:58.016 "trtype": "TCP", 00:11:58.016 "adrfam": "IPv4", 00:11:58.016 "traddr": "10.0.0.1", 00:11:58.016 "trsvcid": "38206" 00:11:58.016 }, 00:11:58.016 "auth": { 00:11:58.016 "state": "completed", 00:11:58.016 "digest": "sha384", 00:11:58.016 "dhgroup": "ffdhe2048" 00:11:58.016 } 00:11:58.016 } 00:11:58.016 ]' 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.016 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.274 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.274 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.274 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.274 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.274 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.532 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.099 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.358 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.616 00:11:59.616 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.616 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.616 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.875 { 00:11:59.875 "cntlid": 61, 00:11:59.875 "qid": 0, 00:11:59.875 "state": "enabled", 00:11:59.875 "thread": "nvmf_tgt_poll_group_000", 00:11:59.875 "listen_address": { 00:11:59.875 "trtype": "TCP", 00:11:59.875 "adrfam": "IPv4", 00:11:59.875 "traddr": "10.0.0.2", 00:11:59.875 "trsvcid": "4420" 00:11:59.875 }, 00:11:59.875 "peer_address": { 00:11:59.875 "trtype": "TCP", 00:11:59.875 "adrfam": "IPv4", 00:11:59.875 "traddr": "10.0.0.1", 00:11:59.875 "trsvcid": "45214" 00:11:59.875 }, 00:11:59.875 "auth": { 00:11:59.875 "state": "completed", 00:11:59.875 "digest": "sha384", 00:11:59.875 "dhgroup": 
"ffdhe2048" 00:11:59.875 } 00:11:59.875 } 00:11:59.875 ]' 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.875 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.133 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.133 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.133 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.133 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.133 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.392 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:00.969 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:01.226 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:01.226 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.226 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.227 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.484 00:12:01.484 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.484 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.484 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.742 { 00:12:01.742 "cntlid": 63, 00:12:01.742 "qid": 0, 00:12:01.742 "state": "enabled", 00:12:01.742 "thread": "nvmf_tgt_poll_group_000", 00:12:01.742 "listen_address": { 00:12:01.742 "trtype": "TCP", 00:12:01.742 "adrfam": "IPv4", 00:12:01.742 "traddr": "10.0.0.2", 00:12:01.742 "trsvcid": "4420" 00:12:01.742 }, 00:12:01.742 "peer_address": { 00:12:01.742 "trtype": "TCP", 00:12:01.742 "adrfam": "IPv4", 00:12:01.742 "traddr": "10.0.0.1", 00:12:01.742 "trsvcid": "45236" 00:12:01.742 }, 00:12:01.742 "auth": { 00:12:01.742 "state": "completed", 00:12:01.742 "digest": "sha384", 00:12:01.742 "dhgroup": "ffdhe2048" 00:12:01.742 } 00:12:01.742 } 00:12:01.742 ]' 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:01.742 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.000 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.000 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.000 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.258 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 
88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:02.823 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.082 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.340 00:12:03.340 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.340 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.340 12:36:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.598 { 00:12:03.598 "cntlid": 65, 00:12:03.598 "qid": 0, 00:12:03.598 "state": "enabled", 00:12:03.598 "thread": "nvmf_tgt_poll_group_000", 00:12:03.598 "listen_address": { 00:12:03.598 "trtype": "TCP", 00:12:03.598 "adrfam": "IPv4", 00:12:03.598 "traddr": "10.0.0.2", 00:12:03.598 "trsvcid": "4420" 00:12:03.598 }, 00:12:03.598 "peer_address": { 00:12:03.598 "trtype": "TCP", 00:12:03.598 "adrfam": "IPv4", 00:12:03.598 "traddr": "10.0.0.1", 00:12:03.598 "trsvcid": "45258" 00:12:03.598 }, 00:12:03.598 "auth": { 00:12:03.598 "state": "completed", 00:12:03.598 "digest": "sha384", 00:12:03.598 "dhgroup": "ffdhe3072" 00:12:03.598 } 00:12:03.598 } 00:12:03.598 ]' 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:03.598 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.856 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.856 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.856 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.115 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:12:04.682 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.682 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:04.682 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.682 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.682 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.682 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.682 
12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:04.682 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.941 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.199 00:12:05.199 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.199 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.199 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.459 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.459 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.459 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.459 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.459 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.459 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.459 { 00:12:05.459 "cntlid": 67, 00:12:05.459 "qid": 0, 00:12:05.459 "state": "enabled", 00:12:05.459 "thread": "nvmf_tgt_poll_group_000", 00:12:05.459 "listen_address": { 00:12:05.459 "trtype": "TCP", 00:12:05.459 "adrfam": "IPv4", 00:12:05.459 "traddr": "10.0.0.2", 00:12:05.459 "trsvcid": "4420" 00:12:05.459 }, 00:12:05.459 "peer_address": { 00:12:05.459 "trtype": "TCP", 00:12:05.459 
"adrfam": "IPv4", 00:12:05.459 "traddr": "10.0.0.1", 00:12:05.459 "trsvcid": "45288" 00:12:05.459 }, 00:12:05.459 "auth": { 00:12:05.459 "state": "completed", 00:12:05.459 "digest": "sha384", 00:12:05.459 "dhgroup": "ffdhe3072" 00:12:05.459 } 00:12:05.459 } 00:12:05.459 ]' 00:12:05.459 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.459 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.459 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.459 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.459 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.459 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.459 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.459 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.718 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:12:06.285 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.285 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:06.285 12:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.285 12:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.559 12:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.559 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.559 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:06.559 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.818 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.076 00:12:07.076 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.076 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.076 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.342 { 00:12:07.342 "cntlid": 69, 00:12:07.342 "qid": 0, 00:12:07.342 "state": "enabled", 00:12:07.342 "thread": "nvmf_tgt_poll_group_000", 00:12:07.342 "listen_address": { 00:12:07.342 "trtype": "TCP", 00:12:07.342 "adrfam": "IPv4", 00:12:07.342 "traddr": "10.0.0.2", 00:12:07.342 "trsvcid": "4420" 00:12:07.342 }, 00:12:07.342 "peer_address": { 00:12:07.342 "trtype": "TCP", 00:12:07.342 "adrfam": "IPv4", 00:12:07.342 "traddr": "10.0.0.1", 00:12:07.342 "trsvcid": "45316" 00:12:07.342 }, 00:12:07.342 "auth": { 00:12:07.342 "state": "completed", 00:12:07.342 "digest": "sha384", 00:12:07.342 "dhgroup": "ffdhe3072" 00:12:07.342 } 00:12:07.342 } 00:12:07.342 ]' 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:07.342 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.342 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.342 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.342 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.916 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:08.482 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.740 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.998 00:12:08.998 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.998 
12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.998 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.566 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.566 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.566 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.566 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.566 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.566 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.566 { 00:12:09.566 "cntlid": 71, 00:12:09.566 "qid": 0, 00:12:09.566 "state": "enabled", 00:12:09.566 "thread": "nvmf_tgt_poll_group_000", 00:12:09.566 "listen_address": { 00:12:09.566 "trtype": "TCP", 00:12:09.566 "adrfam": "IPv4", 00:12:09.566 "traddr": "10.0.0.2", 00:12:09.566 "trsvcid": "4420" 00:12:09.566 }, 00:12:09.566 "peer_address": { 00:12:09.566 "trtype": "TCP", 00:12:09.566 "adrfam": "IPv4", 00:12:09.566 "traddr": "10.0.0.1", 00:12:09.566 "trsvcid": "45352" 00:12:09.566 }, 00:12:09.566 "auth": { 00:12:09.566 "state": "completed", 00:12:09.566 "digest": "sha384", 00:12:09.566 "dhgroup": "ffdhe3072" 00:12:09.566 } 00:12:09.566 } 00:12:09.566 ]' 00:12:09.566 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.566 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.566 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.566 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:09.566 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.566 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.566 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.566 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.825 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.391 12:36:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:10.391 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.649 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.214 00:12:11.214 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.214 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.214 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.472 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.472 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.472 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.472 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.472 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.472 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.472 { 00:12:11.472 "cntlid": 73, 00:12:11.472 "qid": 0, 00:12:11.472 "state": "enabled", 00:12:11.472 "thread": "nvmf_tgt_poll_group_000", 00:12:11.472 "listen_address": { 00:12:11.472 "trtype": 
"TCP", 00:12:11.472 "adrfam": "IPv4", 00:12:11.472 "traddr": "10.0.0.2", 00:12:11.472 "trsvcid": "4420" 00:12:11.472 }, 00:12:11.472 "peer_address": { 00:12:11.472 "trtype": "TCP", 00:12:11.472 "adrfam": "IPv4", 00:12:11.472 "traddr": "10.0.0.1", 00:12:11.472 "trsvcid": "42888" 00:12:11.472 }, 00:12:11.472 "auth": { 00:12:11.472 "state": "completed", 00:12:11.472 "digest": "sha384", 00:12:11.472 "dhgroup": "ffdhe4096" 00:12:11.472 } 00:12:11.472 } 00:12:11.472 ]' 00:12:11.472 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.472 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.472 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.472 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:11.472 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.472 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.472 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.472 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.755 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:12.688 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:12.945 12:36:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.945 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.203 00:12:13.203 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.203 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.203 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.461 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.461 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.461 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.461 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.461 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.461 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.461 { 00:12:13.461 "cntlid": 75, 00:12:13.461 "qid": 0, 00:12:13.461 "state": "enabled", 00:12:13.461 "thread": "nvmf_tgt_poll_group_000", 00:12:13.461 "listen_address": { 00:12:13.461 "trtype": "TCP", 00:12:13.461 "adrfam": "IPv4", 00:12:13.461 "traddr": "10.0.0.2", 00:12:13.461 "trsvcid": "4420" 00:12:13.461 }, 00:12:13.461 "peer_address": { 00:12:13.461 "trtype": "TCP", 00:12:13.461 "adrfam": "IPv4", 00:12:13.461 "traddr": "10.0.0.1", 00:12:13.461 "trsvcid": "42912" 00:12:13.461 }, 00:12:13.461 "auth": { 00:12:13.461 "state": "completed", 00:12:13.461 "digest": "sha384", 00:12:13.461 "dhgroup": "ffdhe4096" 00:12:13.461 } 00:12:13.461 } 00:12:13.461 ]' 00:12:13.461 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.719 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.719 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.719 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:13.719 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.719 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:12:13.719 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.719 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.977 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:14.543 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.801 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.802 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.802 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.368 00:12:15.368 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.368 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.368 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.627 { 00:12:15.627 "cntlid": 77, 00:12:15.627 "qid": 0, 00:12:15.627 "state": "enabled", 00:12:15.627 "thread": "nvmf_tgt_poll_group_000", 00:12:15.627 "listen_address": { 00:12:15.627 "trtype": "TCP", 00:12:15.627 "adrfam": "IPv4", 00:12:15.627 "traddr": "10.0.0.2", 00:12:15.627 "trsvcid": "4420" 00:12:15.627 }, 00:12:15.627 "peer_address": { 00:12:15.627 "trtype": "TCP", 00:12:15.627 "adrfam": "IPv4", 00:12:15.627 "traddr": "10.0.0.1", 00:12:15.627 "trsvcid": "42954" 00:12:15.627 }, 00:12:15.627 "auth": { 00:12:15.627 "state": "completed", 00:12:15.627 "digest": "sha384", 00:12:15.627 "dhgroup": "ffdhe4096" 00:12:15.627 } 00:12:15.627 } 00:12:15.627 ]' 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.627 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.886 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.453 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.711 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.712 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.712 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.970 00:12:17.229 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.229 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.229 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.488 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.488 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.488 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.488 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.488 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.488 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:17.488 { 00:12:17.488 "cntlid": 79, 00:12:17.488 "qid": 0, 00:12:17.488 "state": "enabled", 00:12:17.488 "thread": "nvmf_tgt_poll_group_000", 00:12:17.488 "listen_address": { 00:12:17.488 "trtype": "TCP", 00:12:17.488 "adrfam": "IPv4", 00:12:17.488 "traddr": "10.0.0.2", 00:12:17.488 "trsvcid": "4420" 00:12:17.488 }, 00:12:17.488 "peer_address": { 00:12:17.488 "trtype": "TCP", 00:12:17.488 "adrfam": "IPv4", 00:12:17.488 "traddr": "10.0.0.1", 00:12:17.488 "trsvcid": "42976" 00:12:17.488 }, 00:12:17.488 "auth": { 00:12:17.488 "state": "completed", 00:12:17.488 "digest": "sha384", 00:12:17.488 "dhgroup": "ffdhe4096" 00:12:17.488 } 00:12:17.488 } 00:12:17.488 ]' 00:12:17.488 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.488 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.488 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.488 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:17.488 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.488 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.488 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.488 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.747 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
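The cycle that this trace repeats for every digest, DH group and key index boils down to a short RPC sequence. Below is a condensed sketch of one iteration (sha384 with ffdhe6144 and key0); every path, NQN, address and key name is copied from the run above. The target-side call is shown as a plain rpc.py invocation where the test goes through its rpc_cmd helper, and the host-side calls use the /var/tmp/host.sock socket exactly as the hostrpc wrapper does:

    # host side: restrict negotiation to one digest and one FFDHE group
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # target side: permit the host NQN to authenticate with key0/ckey0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller with the matching keys, forcing a
    # DH-HMAC-CHAP handshake over the TCP transport
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # clean up before the next digest/dhgroup/key combination
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0
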
00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.684 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.251 00:12:19.251 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.251 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.251 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.510 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.510 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.510 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.510 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.510 { 00:12:19.510 "cntlid": 81, 00:12:19.510 "qid": 0, 00:12:19.510 "state": "enabled", 00:12:19.510 "thread": "nvmf_tgt_poll_group_000", 00:12:19.510 "listen_address": { 00:12:19.510 "trtype": "TCP", 00:12:19.510 "adrfam": "IPv4", 00:12:19.510 "traddr": "10.0.0.2", 00:12:19.510 "trsvcid": "4420" 00:12:19.510 }, 00:12:19.510 "peer_address": { 00:12:19.510 "trtype": "TCP", 00:12:19.510 "adrfam": "IPv4", 00:12:19.510 "traddr": "10.0.0.1", 00:12:19.510 "trsvcid": "42998" 00:12:19.510 }, 00:12:19.510 "auth": { 00:12:19.510 "state": "completed", 00:12:19.510 "digest": "sha384", 00:12:19.510 "dhgroup": "ffdhe6144" 00:12:19.510 } 00:12:19.510 } 00:12:19.510 ]' 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.510 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.078 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:12:20.645 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.646 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:20.646 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.646 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.646 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.646 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.646 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.646 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.904 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.162 00:12:21.162 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.162 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.162 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.420 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.420 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.420 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.420 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.420 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.420 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.420 { 00:12:21.420 "cntlid": 83, 00:12:21.420 "qid": 0, 00:12:21.420 "state": "enabled", 00:12:21.420 "thread": "nvmf_tgt_poll_group_000", 00:12:21.420 "listen_address": { 00:12:21.420 "trtype": "TCP", 00:12:21.420 "adrfam": "IPv4", 00:12:21.420 "traddr": "10.0.0.2", 00:12:21.420 "trsvcid": "4420" 00:12:21.420 }, 00:12:21.420 "peer_address": { 00:12:21.420 "trtype": "TCP", 00:12:21.420 "adrfam": "IPv4", 00:12:21.420 "traddr": "10.0.0.1", 00:12:21.420 "trsvcid": "51378" 00:12:21.420 }, 00:12:21.420 "auth": { 00:12:21.420 "state": "completed", 00:12:21.420 "digest": "sha384", 00:12:21.420 "dhgroup": "ffdhe6144" 00:12:21.420 } 00:12:21.420 } 00:12:21.420 ]' 00:12:21.420 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.677 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.678 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.678 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.678 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.678 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.678 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.678 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.936 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:22.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.505 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.764 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.331 00:12:23.331 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.331 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.331 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.591 { 00:12:23.591 "cntlid": 85, 00:12:23.591 "qid": 0, 00:12:23.591 "state": "enabled", 00:12:23.591 "thread": "nvmf_tgt_poll_group_000", 00:12:23.591 "listen_address": { 00:12:23.591 "trtype": "TCP", 00:12:23.591 "adrfam": "IPv4", 00:12:23.591 "traddr": "10.0.0.2", 00:12:23.591 "trsvcid": "4420" 00:12:23.591 }, 00:12:23.591 "peer_address": { 00:12:23.591 "trtype": "TCP", 00:12:23.591 "adrfam": "IPv4", 00:12:23.591 "traddr": "10.0.0.1", 00:12:23.591 "trsvcid": "51410" 00:12:23.591 }, 00:12:23.591 "auth": { 00:12:23.591 "state": "completed", 00:12:23.591 "digest": "sha384", 00:12:23.591 "dhgroup": "ffdhe6144" 00:12:23.591 } 00:12:23.591 } 00:12:23.591 ]' 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.591 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.849 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.849 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.849 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.107 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:24.674 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:24.932 12:36:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.932 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.501 00:12:25.501 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.501 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.501 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.771 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.772 { 00:12:25.772 "cntlid": 87, 00:12:25.772 "qid": 0, 00:12:25.772 "state": "enabled", 00:12:25.772 "thread": "nvmf_tgt_poll_group_000", 00:12:25.772 "listen_address": { 00:12:25.772 "trtype": "TCP", 00:12:25.772 "adrfam": "IPv4", 00:12:25.772 "traddr": "10.0.0.2", 00:12:25.772 "trsvcid": "4420" 00:12:25.772 }, 00:12:25.772 "peer_address": { 00:12:25.772 "trtype": "TCP", 00:12:25.772 "adrfam": "IPv4", 00:12:25.772 "traddr": "10.0.0.1", 00:12:25.772 "trsvcid": "51438" 00:12:25.772 }, 00:12:25.772 "auth": { 00:12:25.772 "state": "completed", 00:12:25.772 "digest": "sha384", 00:12:25.772 "dhgroup": "ffdhe6144" 00:12:25.772 } 00:12:25.772 } 00:12:25.772 ]' 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.772 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.043 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.979 12:36:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.979 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.546 00:12:27.546 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.546 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.546 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.804 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.804 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.804 12:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.804 12:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.804 12:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.804 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.804 { 00:12:27.804 "cntlid": 89, 00:12:27.804 "qid": 0, 00:12:27.804 "state": "enabled", 00:12:27.804 "thread": "nvmf_tgt_poll_group_000", 00:12:27.804 "listen_address": { 00:12:27.804 "trtype": "TCP", 00:12:27.804 "adrfam": "IPv4", 00:12:27.804 "traddr": "10.0.0.2", 00:12:27.804 "trsvcid": "4420" 00:12:27.804 }, 00:12:27.804 "peer_address": { 00:12:27.804 "trtype": "TCP", 00:12:27.804 "adrfam": "IPv4", 00:12:27.804 "traddr": "10.0.0.1", 00:12:27.804 "trsvcid": "51446" 00:12:27.804 }, 00:12:27.804 "auth": { 00:12:27.804 "state": "completed", 00:12:27.804 "digest": "sha384", 00:12:27.804 "dhgroup": "ffdhe8192" 00:12:27.804 } 00:12:27.804 } 00:12:27.804 ]' 00:12:27.804 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.063 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.063 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.063 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.063 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.063 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.063 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.063 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.321 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret 
DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.254 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.512 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.079 00:12:30.079 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.079 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.079 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
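Each attach in the trace is then validated from the target's side before the kernel initiator is exercised: nvmf_subsystem_get_qpairs must report the expected digest and DH group and a "completed" auth state for the new queue pair, after which the same secrets are pushed through nvme-cli and the host entry is removed again. A condensed sketch of that verify-and-connect leg, using only commands visible in this trace (SECRET and CTRL_SECRET are placeholders for the run's throwaway DHHC-1 key strings, and the target-side calls are shown as plain rpc.py invocations where the test uses its rpc_cmd helper):

    # verify the auth parameters negotiated on the qpair created by the attach;
    # the trace checks .digest, .dhgroup and .state individually with jq
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs \
        nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
    # repeat the handshake through the kernel initiator, then tear it down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c \
        --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c \
        --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # drop the host entry so the next key index starts from a clean subsystem
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c
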
00:12:30.337 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.337 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.337 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.337 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.338 { 00:12:30.338 "cntlid": 91, 00:12:30.338 "qid": 0, 00:12:30.338 "state": "enabled", 00:12:30.338 "thread": "nvmf_tgt_poll_group_000", 00:12:30.338 "listen_address": { 00:12:30.338 "trtype": "TCP", 00:12:30.338 "adrfam": "IPv4", 00:12:30.338 "traddr": "10.0.0.2", 00:12:30.338 "trsvcid": "4420" 00:12:30.338 }, 00:12:30.338 "peer_address": { 00:12:30.338 "trtype": "TCP", 00:12:30.338 "adrfam": "IPv4", 00:12:30.338 "traddr": "10.0.0.1", 00:12:30.338 "trsvcid": "38188" 00:12:30.338 }, 00:12:30.338 "auth": { 00:12:30.338 "state": "completed", 00:12:30.338 "digest": "sha384", 00:12:30.338 "dhgroup": "ffdhe8192" 00:12:30.338 } 00:12:30.338 } 00:12:30.338 ]' 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.338 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.596 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:31.556 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.556 12:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.814 12:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.814 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.814 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.381 00:12:32.381 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.381 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.381 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.640 { 00:12:32.640 "cntlid": 93, 00:12:32.640 "qid": 0, 00:12:32.640 "state": "enabled", 00:12:32.640 "thread": "nvmf_tgt_poll_group_000", 00:12:32.640 "listen_address": { 00:12:32.640 "trtype": "TCP", 00:12:32.640 "adrfam": "IPv4", 00:12:32.640 "traddr": "10.0.0.2", 00:12:32.640 "trsvcid": "4420" 00:12:32.640 }, 00:12:32.640 "peer_address": { 00:12:32.640 "trtype": "TCP", 00:12:32.640 "adrfam": "IPv4", 00:12:32.640 "traddr": "10.0.0.1", 00:12:32.640 "trsvcid": "38210" 00:12:32.640 }, 00:12:32.640 
"auth": { 00:12:32.640 "state": "completed", 00:12:32.640 "digest": "sha384", 00:12:32.640 "dhgroup": "ffdhe8192" 00:12:32.640 } 00:12:32.640 } 00:12:32.640 ]' 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.640 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.899 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.836 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.094 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.094 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.094 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.659 00:12:34.659 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.659 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.659 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.918 { 00:12:34.918 "cntlid": 95, 00:12:34.918 "qid": 0, 00:12:34.918 "state": "enabled", 00:12:34.918 "thread": "nvmf_tgt_poll_group_000", 00:12:34.918 "listen_address": { 00:12:34.918 "trtype": "TCP", 00:12:34.918 "adrfam": "IPv4", 00:12:34.918 "traddr": "10.0.0.2", 00:12:34.918 "trsvcid": "4420" 00:12:34.918 }, 00:12:34.918 "peer_address": { 00:12:34.918 "trtype": "TCP", 00:12:34.918 "adrfam": "IPv4", 00:12:34.918 "traddr": "10.0.0.1", 00:12:34.918 "trsvcid": "38234" 00:12:34.918 }, 00:12:34.918 "auth": { 00:12:34.918 "state": "completed", 00:12:34.918 "digest": "sha384", 00:12:34.918 "dhgroup": "ffdhe8192" 00:12:34.918 } 00:12:34.918 } 00:12:34.918 ]' 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.918 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.484 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.050 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.309 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.669 00:12:36.669 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:12:36.669 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.669 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.927 { 00:12:36.927 "cntlid": 97, 00:12:36.927 "qid": 0, 00:12:36.927 "state": "enabled", 00:12:36.927 "thread": "nvmf_tgt_poll_group_000", 00:12:36.927 "listen_address": { 00:12:36.927 "trtype": "TCP", 00:12:36.927 "adrfam": "IPv4", 00:12:36.927 "traddr": "10.0.0.2", 00:12:36.927 "trsvcid": "4420" 00:12:36.927 }, 00:12:36.927 "peer_address": { 00:12:36.927 "trtype": "TCP", 00:12:36.927 "adrfam": "IPv4", 00:12:36.927 "traddr": "10.0.0.1", 00:12:36.927 "trsvcid": "38262" 00:12:36.927 }, 00:12:36.927 "auth": { 00:12:36.927 "state": "completed", 00:12:36.927 "digest": "sha512", 00:12:36.927 "dhgroup": "null" 00:12:36.927 } 00:12:36.927 } 00:12:36.927 ]' 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.927 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.186 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:12:37.750 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.750 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:37.750 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.750 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.007 12:37:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.007 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.007 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.007 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.263 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.520 00:12:38.520 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.520 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.520 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.777 { 00:12:38.777 "cntlid": 99, 00:12:38.777 "qid": 0, 00:12:38.777 "state": "enabled", 00:12:38.777 "thread": "nvmf_tgt_poll_group_000", 00:12:38.777 "listen_address": { 00:12:38.777 "trtype": "TCP", 00:12:38.777 "adrfam": 
"IPv4", 00:12:38.777 "traddr": "10.0.0.2", 00:12:38.777 "trsvcid": "4420" 00:12:38.777 }, 00:12:38.777 "peer_address": { 00:12:38.777 "trtype": "TCP", 00:12:38.777 "adrfam": "IPv4", 00:12:38.777 "traddr": "10.0.0.1", 00:12:38.777 "trsvcid": "38276" 00:12:38.777 }, 00:12:38.777 "auth": { 00:12:38.777 "state": "completed", 00:12:38.777 "digest": "sha512", 00:12:38.777 "dhgroup": "null" 00:12:38.777 } 00:12:38.777 } 00:12:38.777 ]' 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:38.777 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.034 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.034 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.034 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.291 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:39.855 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.419 12:37:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.419 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.677 00:12:40.677 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.677 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.677 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.934 { 00:12:40.934 "cntlid": 101, 00:12:40.934 "qid": 0, 00:12:40.934 "state": "enabled", 00:12:40.934 "thread": "nvmf_tgt_poll_group_000", 00:12:40.934 "listen_address": { 00:12:40.934 "trtype": "TCP", 00:12:40.934 "adrfam": "IPv4", 00:12:40.934 "traddr": "10.0.0.2", 00:12:40.934 "trsvcid": "4420" 00:12:40.934 }, 00:12:40.934 "peer_address": { 00:12:40.934 "trtype": "TCP", 00:12:40.934 "adrfam": "IPv4", 00:12:40.934 "traddr": "10.0.0.1", 00:12:40.934 "trsvcid": "42924" 00:12:40.934 }, 00:12:40.934 "auth": { 00:12:40.934 "state": "completed", 00:12:40.934 "digest": "sha512", 00:12:40.934 "dhgroup": "null" 00:12:40.934 } 00:12:40.934 } 00:12:40.934 ]' 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:40.934 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.192 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.192 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:12:41.192 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.450 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:42.016 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.583 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.842 00:12:42.842 12:37:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.842 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.842 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.100 { 00:12:43.100 "cntlid": 103, 00:12:43.100 "qid": 0, 00:12:43.100 "state": "enabled", 00:12:43.100 "thread": "nvmf_tgt_poll_group_000", 00:12:43.100 "listen_address": { 00:12:43.100 "trtype": "TCP", 00:12:43.100 "adrfam": "IPv4", 00:12:43.100 "traddr": "10.0.0.2", 00:12:43.100 "trsvcid": "4420" 00:12:43.100 }, 00:12:43.100 "peer_address": { 00:12:43.100 "trtype": "TCP", 00:12:43.100 "adrfam": "IPv4", 00:12:43.100 "traddr": "10.0.0.1", 00:12:43.100 "trsvcid": "42938" 00:12:43.100 }, 00:12:43.100 "auth": { 00:12:43.100 "state": "completed", 00:12:43.100 "digest": "sha512", 00:12:43.100 "dhgroup": "null" 00:12:43.100 } 00:12:43.100 } 00:12:43.100 ]' 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.100 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.667 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:44.238 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.498 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.065 00:12:45.065 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.065 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.065 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.324 { 00:12:45.324 "cntlid": 105, 00:12:45.324 "qid": 0, 00:12:45.324 "state": "enabled", 00:12:45.324 "thread": "nvmf_tgt_poll_group_000", 00:12:45.324 
"listen_address": { 00:12:45.324 "trtype": "TCP", 00:12:45.324 "adrfam": "IPv4", 00:12:45.324 "traddr": "10.0.0.2", 00:12:45.324 "trsvcid": "4420" 00:12:45.324 }, 00:12:45.324 "peer_address": { 00:12:45.324 "trtype": "TCP", 00:12:45.324 "adrfam": "IPv4", 00:12:45.324 "traddr": "10.0.0.1", 00:12:45.324 "trsvcid": "42972" 00:12:45.324 }, 00:12:45.324 "auth": { 00:12:45.324 "state": "completed", 00:12:45.324 "digest": "sha512", 00:12:45.324 "dhgroup": "ffdhe2048" 00:12:45.324 } 00:12:45.324 } 00:12:45.324 ]' 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.324 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.584 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.584 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.584 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.842 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:46.407 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.664 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.923 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.923 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.923 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.180 00:12:47.180 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.180 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.180 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.450 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.450 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.450 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.450 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.450 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.450 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.450 { 00:12:47.450 "cntlid": 107, 00:12:47.450 "qid": 0, 00:12:47.450 "state": "enabled", 00:12:47.450 "thread": "nvmf_tgt_poll_group_000", 00:12:47.450 "listen_address": { 00:12:47.450 "trtype": "TCP", 00:12:47.450 "adrfam": "IPv4", 00:12:47.450 "traddr": "10.0.0.2", 00:12:47.450 "trsvcid": "4420" 00:12:47.450 }, 00:12:47.450 "peer_address": { 00:12:47.450 "trtype": "TCP", 00:12:47.450 "adrfam": "IPv4", 00:12:47.450 "traddr": "10.0.0.1", 00:12:47.450 "trsvcid": "43000" 00:12:47.450 }, 00:12:47.450 "auth": { 00:12:47.450 "state": "completed", 00:12:47.450 "digest": "sha512", 00:12:47.450 "dhgroup": "ffdhe2048" 00:12:47.450 } 00:12:47.450 } 00:12:47.450 ]' 00:12:47.450 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.450 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.450 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.450 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:47.450 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.450 12:37:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.450 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.450 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.015 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.594 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.866 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.123 00:12:49.123 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.123 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.123 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.381 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.381 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.381 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.381 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.381 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.381 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.381 { 00:12:49.381 "cntlid": 109, 00:12:49.381 "qid": 0, 00:12:49.381 "state": "enabled", 00:12:49.381 "thread": "nvmf_tgt_poll_group_000", 00:12:49.381 "listen_address": { 00:12:49.381 "trtype": "TCP", 00:12:49.381 "adrfam": "IPv4", 00:12:49.381 "traddr": "10.0.0.2", 00:12:49.381 "trsvcid": "4420" 00:12:49.381 }, 00:12:49.381 "peer_address": { 00:12:49.381 "trtype": "TCP", 00:12:49.381 "adrfam": "IPv4", 00:12:49.381 "traddr": "10.0.0.1", 00:12:49.381 "trsvcid": "43022" 00:12:49.381 }, 00:12:49.381 "auth": { 00:12:49.381 "state": "completed", 00:12:49.381 "digest": "sha512", 00:12:49.381 "dhgroup": "ffdhe2048" 00:12:49.381 } 00:12:49.381 } 00:12:49.381 ]' 00:12:49.381 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.381 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.381 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.640 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.640 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.640 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.640 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.640 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.898 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:50.463 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.722 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:50.722 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.722 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.722 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.722 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.722 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.722 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.980 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:51.239 00:12:51.239 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.239 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.239 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:51.497 { 00:12:51.497 "cntlid": 111, 00:12:51.497 "qid": 0, 00:12:51.497 "state": "enabled", 00:12:51.497 "thread": "nvmf_tgt_poll_group_000", 00:12:51.497 "listen_address": { 00:12:51.497 "trtype": "TCP", 00:12:51.497 "adrfam": "IPv4", 00:12:51.497 "traddr": "10.0.0.2", 00:12:51.497 "trsvcid": "4420" 00:12:51.497 }, 00:12:51.497 "peer_address": { 00:12:51.497 "trtype": "TCP", 00:12:51.497 "adrfam": "IPv4", 00:12:51.497 "traddr": "10.0.0.1", 00:12:51.497 "trsvcid": "57542" 00:12:51.497 }, 00:12:51.497 "auth": { 00:12:51.497 "state": "completed", 00:12:51.497 "digest": "sha512", 00:12:51.497 "dhgroup": "ffdhe2048" 00:12:51.497 } 00:12:51.497 } 00:12:51.497 ]' 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.497 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.755 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.755 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.755 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.013 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:52.579 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.838 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.405 00:12:53.405 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.405 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.405 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.663 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.663 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.664 { 00:12:53.664 "cntlid": 113, 00:12:53.664 "qid": 0, 00:12:53.664 "state": "enabled", 00:12:53.664 "thread": "nvmf_tgt_poll_group_000", 00:12:53.664 "listen_address": { 00:12:53.664 "trtype": "TCP", 00:12:53.664 "adrfam": "IPv4", 00:12:53.664 "traddr": "10.0.0.2", 00:12:53.664 "trsvcid": "4420" 00:12:53.664 }, 00:12:53.664 "peer_address": { 00:12:53.664 "trtype": "TCP", 00:12:53.664 "adrfam": "IPv4", 00:12:53.664 "traddr": "10.0.0.1", 00:12:53.664 "trsvcid": "57568" 00:12:53.664 }, 00:12:53.664 "auth": { 00:12:53.664 "state": "completed", 00:12:53.664 "digest": "sha512", 00:12:53.664 "dhgroup": "ffdhe3072" 00:12:53.664 } 00:12:53.664 } 00:12:53.664 ]' 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.664 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.922 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.858 12:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.115 12:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.115 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.115 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.373 00:12:55.373 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.373 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.373 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.631 { 00:12:55.631 "cntlid": 115, 00:12:55.631 "qid": 0, 00:12:55.631 "state": "enabled", 00:12:55.631 "thread": "nvmf_tgt_poll_group_000", 00:12:55.631 "listen_address": { 00:12:55.631 "trtype": "TCP", 00:12:55.631 "adrfam": "IPv4", 00:12:55.631 "traddr": "10.0.0.2", 00:12:55.631 "trsvcid": "4420" 00:12:55.631 }, 00:12:55.631 "peer_address": { 00:12:55.631 "trtype": "TCP", 00:12:55.631 "adrfam": "IPv4", 00:12:55.631 "traddr": "10.0.0.1", 00:12:55.631 "trsvcid": "57596" 00:12:55.631 }, 00:12:55.631 "auth": { 00:12:55.631 "state": "completed", 00:12:55.631 "digest": "sha512", 00:12:55.631 "dhgroup": "ffdhe3072" 00:12:55.631 } 00:12:55.631 } 00:12:55.631 ]' 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.631 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.889 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.889 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.889 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.147 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:12:56.714 12:37:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.714 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:56.714 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.714 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.714 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.714 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.714 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.714 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:57.284 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:57.284 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.284 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.284 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:57.284 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:57.284 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.284 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.285 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.285 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.285 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.285 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.285 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.542 00:12:57.542 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.542 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.542 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.800 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.801 { 00:12:57.801 "cntlid": 117, 00:12:57.801 "qid": 0, 00:12:57.801 "state": "enabled", 00:12:57.801 "thread": "nvmf_tgt_poll_group_000", 00:12:57.801 "listen_address": { 00:12:57.801 "trtype": "TCP", 00:12:57.801 "adrfam": "IPv4", 00:12:57.801 "traddr": "10.0.0.2", 00:12:57.801 "trsvcid": "4420" 00:12:57.801 }, 00:12:57.801 "peer_address": { 00:12:57.801 "trtype": "TCP", 00:12:57.801 "adrfam": "IPv4", 00:12:57.801 "traddr": "10.0.0.1", 00:12:57.801 "trsvcid": "57628" 00:12:57.801 }, 00:12:57.801 "auth": { 00:12:57.801 "state": "completed", 00:12:57.801 "digest": "sha512", 00:12:57.801 "dhgroup": "ffdhe3072" 00:12:57.801 } 00:12:57.801 } 00:12:57.801 ]' 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.801 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.059 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:58.059 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.059 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.059 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.059 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.318 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.250 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.506 00:12:59.506 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.506 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.506 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.072 { 00:13:00.072 "cntlid": 119, 00:13:00.072 "qid": 0, 00:13:00.072 "state": "enabled", 00:13:00.072 "thread": "nvmf_tgt_poll_group_000", 00:13:00.072 "listen_address": { 00:13:00.072 "trtype": "TCP", 00:13:00.072 "adrfam": "IPv4", 00:13:00.072 "traddr": "10.0.0.2", 00:13:00.072 "trsvcid": "4420" 00:13:00.072 }, 00:13:00.072 "peer_address": { 00:13:00.072 "trtype": "TCP", 00:13:00.072 "adrfam": "IPv4", 00:13:00.072 "traddr": "10.0.0.1", 00:13:00.072 "trsvcid": "49120" 00:13:00.072 }, 00:13:00.072 "auth": { 00:13:00.072 "state": "completed", 00:13:00.072 "digest": "sha512", 00:13:00.072 "dhgroup": "ffdhe3072" 00:13:00.072 } 00:13:00.072 } 00:13:00.072 ]' 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.072 
12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.072 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.345 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:00.912 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.170 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.737 00:13:01.737 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.737 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.737 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.996 { 00:13:01.996 "cntlid": 121, 00:13:01.996 "qid": 0, 00:13:01.996 "state": "enabled", 00:13:01.996 "thread": "nvmf_tgt_poll_group_000", 00:13:01.996 "listen_address": { 00:13:01.996 "trtype": "TCP", 00:13:01.996 "adrfam": "IPv4", 00:13:01.996 "traddr": "10.0.0.2", 00:13:01.996 "trsvcid": "4420" 00:13:01.996 }, 00:13:01.996 "peer_address": { 00:13:01.996 "trtype": "TCP", 00:13:01.996 "adrfam": "IPv4", 00:13:01.996 "traddr": "10.0.0.1", 00:13:01.996 "trsvcid": "49152" 00:13:01.996 }, 00:13:01.996 "auth": { 00:13:01.996 "state": "completed", 00:13:01.996 "digest": "sha512", 00:13:01.996 "dhgroup": "ffdhe4096" 00:13:01.996 } 00:13:01.996 } 00:13:01.996 ]' 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.996 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.563 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret 
DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:13:03.130 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.130 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:03.130 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.130 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.130 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.130 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.130 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:03.131 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.389 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.647 00:13:03.906 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.906 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.906 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
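
The trace keeps repeating the same verification after each bdev_nvme_attach_controller: the host-side app (RPC socket /var/tmp/host.sock) must report the controller name, and the target's nvmf_subsystem_get_qpairs output must show the expected digest, dhgroup and a "completed" auth state. A minimal sketch of that check, assuming the repo layout seen in the trace (scripts/rpc.py, host socket at /var/tmp/host.sock); the function name is illustrative and not part of auth.sh:

  check_qpair_auth() {
    local digest=$1 dhgroup=$2 name qpairs
    # controller created by bdev_nvme_attach_controller must be visible on the host app
    name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    # the target reports the negotiated auth parameters per queue pair
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  }
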
00:13:04.165 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.166 { 00:13:04.166 "cntlid": 123, 00:13:04.166 "qid": 0, 00:13:04.166 "state": "enabled", 00:13:04.166 "thread": "nvmf_tgt_poll_group_000", 00:13:04.166 "listen_address": { 00:13:04.166 "trtype": "TCP", 00:13:04.166 "adrfam": "IPv4", 00:13:04.166 "traddr": "10.0.0.2", 00:13:04.166 "trsvcid": "4420" 00:13:04.166 }, 00:13:04.166 "peer_address": { 00:13:04.166 "trtype": "TCP", 00:13:04.166 "adrfam": "IPv4", 00:13:04.166 "traddr": "10.0.0.1", 00:13:04.166 "trsvcid": "49180" 00:13:04.166 }, 00:13:04.166 "auth": { 00:13:04.166 "state": "completed", 00:13:04.166 "digest": "sha512", 00:13:04.166 "dhgroup": "ffdhe4096" 00:13:04.166 } 00:13:04.166 } 00:13:04.166 ]' 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.166 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.424 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:13:05.382 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.382 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.948 00:13:05.948 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.948 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.948 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.206 { 00:13:06.206 "cntlid": 125, 00:13:06.206 "qid": 0, 00:13:06.206 "state": "enabled", 00:13:06.206 "thread": "nvmf_tgt_poll_group_000", 00:13:06.206 "listen_address": { 00:13:06.206 "trtype": "TCP", 00:13:06.206 "adrfam": "IPv4", 00:13:06.206 "traddr": "10.0.0.2", 00:13:06.206 "trsvcid": "4420" 00:13:06.206 }, 00:13:06.206 "peer_address": { 00:13:06.206 "trtype": "TCP", 00:13:06.206 "adrfam": "IPv4", 00:13:06.206 "traddr": "10.0.0.1", 00:13:06.206 "trsvcid": "49198" 00:13:06.206 }, 00:13:06.206 
"auth": { 00:13:06.206 "state": "completed", 00:13:06.206 "digest": "sha512", 00:13:06.206 "dhgroup": "ffdhe4096" 00:13:06.206 } 00:13:06.206 } 00:13:06.206 ]' 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.206 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.771 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:13:07.335 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.336 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:07.336 12:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.336 12:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.336 12:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.336 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.336 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:07.336 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.593 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.851 00:13:07.851 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.851 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.851 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.124 { 00:13:08.124 "cntlid": 127, 00:13:08.124 "qid": 0, 00:13:08.124 "state": "enabled", 00:13:08.124 "thread": "nvmf_tgt_poll_group_000", 00:13:08.124 "listen_address": { 00:13:08.124 "trtype": "TCP", 00:13:08.124 "adrfam": "IPv4", 00:13:08.124 "traddr": "10.0.0.2", 00:13:08.124 "trsvcid": "4420" 00:13:08.124 }, 00:13:08.124 "peer_address": { 00:13:08.124 "trtype": "TCP", 00:13:08.124 "adrfam": "IPv4", 00:13:08.124 "traddr": "10.0.0.1", 00:13:08.124 "trsvcid": "49222" 00:13:08.124 }, 00:13:08.124 "auth": { 00:13:08.124 "state": "completed", 00:13:08.124 "digest": "sha512", 00:13:08.124 "dhgroup": "ffdhe4096" 00:13:08.124 } 00:13:08.124 } 00:13:08.124 ]' 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.124 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.381 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:08.381 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.381 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.381 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.381 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.639 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:09.223 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.481 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.047 00:13:10.047 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.047 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
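
The lines tagged auth.sh@92-@96 show the outer structure: for every DH group and every key id the host options are narrowed to one digest/dhgroup, the host NQN is added to the subsystem with the matching DH-HMAC-CHAP key(s), a controller is attached through the SPDK host app, checked, and detached. A hedged reconstruction of that loop for the sha512 stretch shown here; key registration and the dhgroups/keys/ckeys array contents are elided, and hostnqn stands in for the nqn.2014-08.org.nvmexpress:uuid value from the log. The verification step is the sketch above; the nvme-cli round trip follows in the sketch further down.

  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # key3 has no controller key in this run, so ckey expands to nothing for it
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      # pin the host-side initiator to one digest/dhgroup so the negotiation is deterministic
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # allow the host on the subsystem with the matching DH-HMAC-CHAP key(s)
      scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
      # attach through the SPDK host app, verify the negotiated parameters, tear down
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
      check_qpair_auth sha512 "$dhgroup"
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    done
  done
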
00:13:10.047 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.306 { 00:13:10.306 "cntlid": 129, 00:13:10.306 "qid": 0, 00:13:10.306 "state": "enabled", 00:13:10.306 "thread": "nvmf_tgt_poll_group_000", 00:13:10.306 "listen_address": { 00:13:10.306 "trtype": "TCP", 00:13:10.306 "adrfam": "IPv4", 00:13:10.306 "traddr": "10.0.0.2", 00:13:10.306 "trsvcid": "4420" 00:13:10.306 }, 00:13:10.306 "peer_address": { 00:13:10.306 "trtype": "TCP", 00:13:10.306 "adrfam": "IPv4", 00:13:10.306 "traddr": "10.0.0.1", 00:13:10.306 "trsvcid": "43590" 00:13:10.306 }, 00:13:10.306 "auth": { 00:13:10.306 "state": "completed", 00:13:10.306 "digest": "sha512", 00:13:10.306 "dhgroup": "ffdhe6144" 00:13:10.306 } 00:13:10.306 } 00:13:10.306 ]' 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.306 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.872 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:13:11.438 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.438 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:11.438 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.438 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.438 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.438 
12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.438 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.438 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.698 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.956 00:13:11.956 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.956 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.956 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.522 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.522 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.522 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.522 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.522 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.523 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.523 { 00:13:12.523 "cntlid": 131, 00:13:12.523 "qid": 0, 00:13:12.523 "state": "enabled", 00:13:12.523 "thread": "nvmf_tgt_poll_group_000", 00:13:12.523 "listen_address": { 00:13:12.523 "trtype": "TCP", 00:13:12.523 "adrfam": "IPv4", 00:13:12.523 "traddr": "10.0.0.2", 00:13:12.523 "trsvcid": 
"4420" 00:13:12.523 }, 00:13:12.523 "peer_address": { 00:13:12.523 "trtype": "TCP", 00:13:12.523 "adrfam": "IPv4", 00:13:12.523 "traddr": "10.0.0.1", 00:13:12.523 "trsvcid": "43618" 00:13:12.523 }, 00:13:12.523 "auth": { 00:13:12.523 "state": "completed", 00:13:12.523 "digest": "sha512", 00:13:12.523 "dhgroup": "ffdhe6144" 00:13:12.523 } 00:13:12.523 } 00:13:12.523 ]' 00:13:12.523 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.523 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.523 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.523 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:12.523 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.523 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.523 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.523 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.780 12:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:13:13.345 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.603 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:13.603 12:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.603 12:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.603 12:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.603 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.603 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:13.603 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:13.860 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:13.860 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.860 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:13.860 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.861 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.118 00:13:14.118 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.118 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.118 12:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.375 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.375 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.375 12:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.375 12:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.375 12:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.375 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.375 { 00:13:14.375 "cntlid": 133, 00:13:14.375 "qid": 0, 00:13:14.375 "state": "enabled", 00:13:14.375 "thread": "nvmf_tgt_poll_group_000", 00:13:14.375 "listen_address": { 00:13:14.375 "trtype": "TCP", 00:13:14.375 "adrfam": "IPv4", 00:13:14.375 "traddr": "10.0.0.2", 00:13:14.375 "trsvcid": "4420" 00:13:14.375 }, 00:13:14.375 "peer_address": { 00:13:14.375 "trtype": "TCP", 00:13:14.375 "adrfam": "IPv4", 00:13:14.375 "traddr": "10.0.0.1", 00:13:14.375 "trsvcid": "43658" 00:13:14.375 }, 00:13:14.375 "auth": { 00:13:14.375 "state": "completed", 00:13:14.375 "digest": "sha512", 00:13:14.375 "dhgroup": "ffdhe6144" 00:13:14.375 } 00:13:14.375 } 00:13:14.375 ]' 00:13:14.375 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.633 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.633 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.633 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:14.633 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.633 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.633 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
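
Besides the SPDK host app, each iteration in the trace (auth.sh@52/@55/@56) also exercises the kernel initiator through nvme-cli and then removes the host entry so the next combination starts clean. A short sketch of that half, assuming the addresses and NQNs from the log; the DHHC-1 secrets printed in the trace are stood in for by the key/ckey variables here:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"
  # drop the host so the next (dhgroup, key) combination re-adds it with fresh keys
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
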
00:13:14.633 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.891 12:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.823 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.824 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.389 00:13:16.389 12:37:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.389 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.389 12:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:16.648 { 00:13:16.648 "cntlid": 135, 00:13:16.648 "qid": 0, 00:13:16.648 "state": "enabled", 00:13:16.648 "thread": "nvmf_tgt_poll_group_000", 00:13:16.648 "listen_address": { 00:13:16.648 "trtype": "TCP", 00:13:16.648 "adrfam": "IPv4", 00:13:16.648 "traddr": "10.0.0.2", 00:13:16.648 "trsvcid": "4420" 00:13:16.648 }, 00:13:16.648 "peer_address": { 00:13:16.648 "trtype": "TCP", 00:13:16.648 "adrfam": "IPv4", 00:13:16.648 "traddr": "10.0.0.1", 00:13:16.648 "trsvcid": "43688" 00:13:16.648 }, 00:13:16.648 "auth": { 00:13:16.648 "state": "completed", 00:13:16.648 "digest": "sha512", 00:13:16.648 "dhgroup": "ffdhe6144" 00:13:16.648 } 00:13:16.648 } 00:13:16.648 ]' 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.648 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.907 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:16.907 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.907 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.907 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.907 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.166 12:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.755 12:37:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:17.755 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.319 12:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.887 00:13:18.887 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.887 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.887 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.146 { 00:13:19.146 "cntlid": 137, 00:13:19.146 "qid": 0, 00:13:19.146 "state": "enabled", 
00:13:19.146 "thread": "nvmf_tgt_poll_group_000", 00:13:19.146 "listen_address": { 00:13:19.146 "trtype": "TCP", 00:13:19.146 "adrfam": "IPv4", 00:13:19.146 "traddr": "10.0.0.2", 00:13:19.146 "trsvcid": "4420" 00:13:19.146 }, 00:13:19.146 "peer_address": { 00:13:19.146 "trtype": "TCP", 00:13:19.146 "adrfam": "IPv4", 00:13:19.146 "traddr": "10.0.0.1", 00:13:19.146 "trsvcid": "43702" 00:13:19.146 }, 00:13:19.146 "auth": { 00:13:19.146 "state": "completed", 00:13:19.146 "digest": "sha512", 00:13:19.146 "dhgroup": "ffdhe8192" 00:13:19.146 } 00:13:19.146 } 00:13:19.146 ]' 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.146 12:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.714 12:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.281 12:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:20.540 
12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.540 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.104 00:13:21.104 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.104 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.104 12:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.670 { 00:13:21.670 "cntlid": 139, 00:13:21.670 "qid": 0, 00:13:21.670 "state": "enabled", 00:13:21.670 "thread": "nvmf_tgt_poll_group_000", 00:13:21.670 "listen_address": { 00:13:21.670 "trtype": "TCP", 00:13:21.670 "adrfam": "IPv4", 00:13:21.670 "traddr": "10.0.0.2", 00:13:21.670 "trsvcid": "4420" 00:13:21.670 }, 00:13:21.670 "peer_address": { 00:13:21.670 "trtype": "TCP", 00:13:21.670 "adrfam": "IPv4", 00:13:21.670 "traddr": "10.0.0.1", 00:13:21.670 "trsvcid": "37472" 00:13:21.670 }, 00:13:21.670 "auth": { 00:13:21.670 "state": "completed", 00:13:21.670 "digest": "sha512", 00:13:21.670 "dhgroup": "ffdhe8192" 00:13:21.670 } 00:13:21.670 } 00:13:21.670 ]' 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.670 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.928 12:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:01:YjhlNTkwZjY2MWQ3NmE3ZDNmZjc2MDA0MDRlZDUyNGaQYm6q: --dhchap-ctrl-secret DHHC-1:02:NTRjZjNmYzlkNTNiYzAwNzgzNmJiOTgyZTUyN2ZjMmVkYWQ5NWEyMzc2NTIxYjBmwDNCFw==: 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:22.863 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.864 12:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.430 00:13:23.689 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.689 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.689 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.689 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.689 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.689 12:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.689 12:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.949 { 00:13:23.949 "cntlid": 141, 00:13:23.949 "qid": 0, 00:13:23.949 "state": "enabled", 00:13:23.949 "thread": "nvmf_tgt_poll_group_000", 00:13:23.949 "listen_address": { 00:13:23.949 "trtype": "TCP", 00:13:23.949 "adrfam": "IPv4", 00:13:23.949 "traddr": "10.0.0.2", 00:13:23.949 "trsvcid": "4420" 00:13:23.949 }, 00:13:23.949 "peer_address": { 00:13:23.949 "trtype": "TCP", 00:13:23.949 "adrfam": "IPv4", 00:13:23.949 "traddr": "10.0.0.1", 00:13:23.949 "trsvcid": "37498" 00:13:23.949 }, 00:13:23.949 "auth": { 00:13:23.949 "state": "completed", 00:13:23.949 "digest": "sha512", 00:13:23.949 "dhgroup": "ffdhe8192" 00:13:23.949 } 00:13:23.949 } 00:13:23.949 ]' 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.949 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.208 12:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:02:YzQ3OWY0YmE2ZmZmYzllNDgwNDYwY2RiY2RhNWUzZDUxYmJmOTllMWJkMTlkMTMx+ZnMWQ==: --dhchap-ctrl-secret DHHC-1:01:NjllYzMyYzJiMmNkM2ZiMWRkZDNiZmU2ZDFiYWI4NTeNDEcb: 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.144 12:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.080 00:13:26.080 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.080 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.080 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.080 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.080 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.080 12:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.081 12:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.081 12:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
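Each pass also drives the kernel initiator: nvme-cli connects to the subsystem with the DHHC-1 secrets that match the key just granted, then disconnects before the host entry is removed with nvmf_subsystem_remove_host. A minimal sketch of that step, with $key and $ckey standing in for the DHHC-1:xx:...: strings printed in the trace (all other values as used throughout this run):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c \
      --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0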
00:13:26.081 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.081 { 00:13:26.081 "cntlid": 143, 00:13:26.081 "qid": 0, 00:13:26.081 "state": "enabled", 00:13:26.081 "thread": "nvmf_tgt_poll_group_000", 00:13:26.081 "listen_address": { 00:13:26.081 "trtype": "TCP", 00:13:26.081 "adrfam": "IPv4", 00:13:26.081 "traddr": "10.0.0.2", 00:13:26.081 "trsvcid": "4420" 00:13:26.081 }, 00:13:26.081 "peer_address": { 00:13:26.081 "trtype": "TCP", 00:13:26.081 "adrfam": "IPv4", 00:13:26.081 "traddr": "10.0.0.1", 00:13:26.081 "trsvcid": "37540" 00:13:26.081 }, 00:13:26.081 "auth": { 00:13:26.081 "state": "completed", 00:13:26.081 "digest": "sha512", 00:13:26.081 "dhgroup": "ffdhe8192" 00:13:26.081 } 00:13:26.081 } 00:13:26.081 ]' 00:13:26.081 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.338 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.338 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.338 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.338 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.338 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.338 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.338 12:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.596 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.531 12:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.531 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.465 00:13:28.465 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.465 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.465 12:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.723 { 00:13:28.723 "cntlid": 145, 00:13:28.723 "qid": 0, 00:13:28.723 "state": "enabled", 00:13:28.723 "thread": "nvmf_tgt_poll_group_000", 00:13:28.723 "listen_address": { 00:13:28.723 "trtype": "TCP", 00:13:28.723 "adrfam": "IPv4", 00:13:28.723 "traddr": "10.0.0.2", 00:13:28.723 "trsvcid": "4420" 00:13:28.723 }, 00:13:28.723 "peer_address": { 00:13:28.723 "trtype": "TCP", 00:13:28.723 "adrfam": "IPv4", 00:13:28.723 "traddr": "10.0.0.1", 00:13:28.723 "trsvcid": "37560" 00:13:28.723 }, 00:13:28.723 "auth": { 00:13:28.723 "state": "completed", 00:13:28.723 "digest": "sha512", 00:13:28.723 "dhgroup": "ffdhe8192" 00:13:28.723 } 00:13:28.723 } 
00:13:28.723 ]' 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.723 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.981 12:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:00:OWFkYTQ0ZjNhNjE2OGVjYmFkYjNiMjMyNTZjZGM0ZjU3N2ZmMTNjMGU4MzdkM2VjXtaczQ==: --dhchap-ctrl-secret DHHC-1:03:NGZkZmI1MzFlYzFiZWFiOTAzYWZkNTRiNDllMTY4MDFlZjc3NTBhMDdkYTczNDVlZTI3M2JiNDI2YWE3MTgxMPsANhk=: 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.915 12:38:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:29.915 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:30.481 request: 00:13:30.481 { 00:13:30.481 "name": "nvme0", 00:13:30.481 "trtype": "tcp", 00:13:30.481 "traddr": "10.0.0.2", 00:13:30.481 "adrfam": "ipv4", 00:13:30.481 "trsvcid": "4420", 00:13:30.481 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:30.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c", 00:13:30.481 "prchk_reftag": false, 00:13:30.481 "prchk_guard": false, 00:13:30.481 "hdgst": false, 00:13:30.481 "ddgst": false, 00:13:30.481 "dhchap_key": "key2", 00:13:30.481 "method": "bdev_nvme_attach_controller", 00:13:30.481 "req_id": 1 00:13:30.481 } 00:13:30.481 Got JSON-RPC error response 00:13:30.481 response: 00:13:30.481 { 00:13:30.481 "code": -5, 00:13:30.481 "message": "Input/output error" 00:13:30.481 } 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:30.481 12:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:31.048 request: 00:13:31.048 { 00:13:31.048 "name": "nvme0", 00:13:31.048 "trtype": "tcp", 00:13:31.048 "traddr": "10.0.0.2", 00:13:31.048 "adrfam": "ipv4", 00:13:31.048 "trsvcid": "4420", 00:13:31.048 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:31.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c", 00:13:31.048 "prchk_reftag": false, 00:13:31.048 "prchk_guard": false, 00:13:31.048 "hdgst": false, 00:13:31.049 "ddgst": false, 00:13:31.049 "dhchap_key": "key1", 00:13:31.049 "dhchap_ctrlr_key": "ckey2", 00:13:31.049 "method": "bdev_nvme_attach_controller", 00:13:31.049 "req_id": 1 00:13:31.049 } 00:13:31.049 Got JSON-RPC error response 00:13:31.049 response: 00:13:31.049 { 00:13:31.049 "code": -5, 00:13:31.049 "message": "Input/output error" 00:13:31.049 } 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key1 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.049 12:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.640 request: 00:13:31.640 { 00:13:31.640 "name": "nvme0", 00:13:31.640 "trtype": "tcp", 00:13:31.640 "traddr": "10.0.0.2", 00:13:31.640 "adrfam": "ipv4", 00:13:31.640 "trsvcid": "4420", 00:13:31.640 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:31.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c", 00:13:31.640 "prchk_reftag": false, 00:13:31.640 "prchk_guard": false, 00:13:31.640 "hdgst": false, 00:13:31.640 "ddgst": false, 00:13:31.640 "dhchap_key": "key1", 00:13:31.640 "dhchap_ctrlr_key": "ckey1", 00:13:31.640 "method": "bdev_nvme_attach_controller", 00:13:31.640 "req_id": 1 00:13:31.640 } 00:13:31.640 Got JSON-RPC error response 00:13:31.640 response: 00:13:31.640 { 00:13:31.640 "code": -5, 00:13:31.640 "message": "Input/output error" 00:13:31.640 } 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69379 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69379 ']' 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69379 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69379 00:13:31.641 killing process with pid 69379 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69379' 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69379 00:13:31.641 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69379 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72402 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72402 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72402 ']' 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.900 12:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
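The requests above are the negative paths: the target holds only key1 for this host (or a deliberately mismatched controller key), so each bdev_nvme_attach_controller attempt fails DH-HMAC-CHAP negotiation and the RPC returns code -5, "Input/output error", which the NOT helper asserts before the first nvmf_tgt (pid 69379) is killed and a fresh one is started with --wait-for-rpc -L nvmf_auth. A rough shell equivalent of one such expected-failure check, assuming the same host RPC socket:

  # the attach must fail: the target has no key2 registered for this host NQN
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
      echo 'unexpected success: attach with an unregistered key should be rejected' >&2
      exit 1
  fi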
00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72402 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72402 ']' 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.275 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.533 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:13:33.534 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.534 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.534 12:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.534 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.534 12:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:34.099 00:13:34.099 12:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.099 12:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.099 12:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.357 12:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.357 12:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.357 12:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.357 12:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.357 12:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.357 12:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.357 { 00:13:34.357 "cntlid": 1, 00:13:34.357 "qid": 0, 00:13:34.357 "state": "enabled", 00:13:34.357 "thread": "nvmf_tgt_poll_group_000", 00:13:34.357 "listen_address": { 00:13:34.357 "trtype": "TCP", 00:13:34.357 "adrfam": "IPv4", 00:13:34.357 "traddr": "10.0.0.2", 00:13:34.357 "trsvcid": "4420" 00:13:34.357 }, 00:13:34.358 "peer_address": { 00:13:34.358 "trtype": "TCP", 00:13:34.358 "adrfam": "IPv4", 00:13:34.358 "traddr": "10.0.0.1", 00:13:34.358 "trsvcid": "51752" 00:13:34.358 }, 00:13:34.358 "auth": { 00:13:34.358 "state": "completed", 00:13:34.358 "digest": "sha512", 00:13:34.358 "dhgroup": "ffdhe8192" 00:13:34.358 } 00:13:34.358 } 00:13:34.358 ]' 00:13:34.358 12:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.358 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.358 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.615 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.615 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.615 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.615 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.615 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.872 12:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid 88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-secret DHHC-1:03:YjA0ZmY1M2IwOWQzMzQxZTQwMjZiNGU0OWMzYzRkZTBjZmVkNGZkM2Y5ZDJlMjc1NWM4NWUwNjI4MmU0Y2ZiNphaUhA=: 00:13:35.438 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.438 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:35.438 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.438 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.438 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --dhchap-key key3 00:13:35.438 12:38:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.438 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.696 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.261 request: 00:13:36.261 { 00:13:36.261 "name": "nvme0", 00:13:36.261 "trtype": "tcp", 00:13:36.261 "traddr": "10.0.0.2", 00:13:36.261 "adrfam": "ipv4", 00:13:36.261 "trsvcid": "4420", 00:13:36.261 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c", 00:13:36.261 "prchk_reftag": false, 00:13:36.261 "prchk_guard": false, 00:13:36.261 "hdgst": false, 00:13:36.261 "ddgst": false, 00:13:36.261 "dhchap_key": "key3", 00:13:36.261 "method": "bdev_nvme_attach_controller", 00:13:36.261 "req_id": 1 00:13:36.261 } 00:13:36.261 Got JSON-RPC error response 00:13:36.261 response: 00:13:36.261 { 00:13:36.261 "code": -5, 00:13:36.261 "message": "Input/output error" 00:13:36.261 } 00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
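The sequence above is the negative DH-HMAC-CHAP check: the host-side bdev layer is first limited to a single digest with bdev_nvme_set_options, after which the attach with key3 is expected to fail with JSON-RPC error -5 (Input/output error), since the target subsystem was set up earlier in this run with sha512/ffdhe8192. A minimal standalone sketch of the same check, assuming the rpc.py path and host socket shown in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # path as seen in the trace
hostsock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c

# Limit the host to sha256; the target was configured with sha512/ffdhe8192,
# so the DH-HMAC-CHAP negotiation cannot succeed.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256

# The attach must now fail; a success here would mean the auth path is broken.
if "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key3; then
    echo "ERROR: attach succeeded with a disallowed digest" >&2
    exit 1
fi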
00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:36.261 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.519 12:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.778 request: 00:13:36.778 { 00:13:36.778 "name": "nvme0", 00:13:36.778 "trtype": "tcp", 00:13:36.778 "traddr": "10.0.0.2", 00:13:36.778 "adrfam": "ipv4", 00:13:36.778 "trsvcid": "4420", 00:13:36.778 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c", 00:13:36.778 "prchk_reftag": false, 00:13:36.778 "prchk_guard": false, 00:13:36.778 "hdgst": false, 00:13:36.778 "ddgst": false, 00:13:36.778 "dhchap_key": "key3", 00:13:36.778 "method": "bdev_nvme_attach_controller", 00:13:36.778 "req_id": 1 00:13:36.778 } 00:13:36.778 Got JSON-RPC error response 00:13:36.778 response: 00:13:36.778 { 00:13:36.778 "code": -5, 00:13:36.778 "message": "Input/output error" 00:13:36.778 } 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:36.778 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:37.036 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:37.037 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:37.037 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:37.037 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
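Each of these expected failures goes through the NOT helper from autotest_common.sh, visible in the trace as the es=0 / valid_exec_arg / (( es > 128 )) bookkeeping wrapped around the hostrpc call. A hedged sketch of a wrapper in that spirit, not the actual helper (which also validates that its argument is runnable before executing it):

# Run a command that is expected to fail and invert its status.
# Exit codes above 128 mean the command died from a signal, so those are
# propagated as real errors rather than counted as the expected failure.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"
    (( es != 0 ))
}

NOT false && echo 'false failed, as expected'
NOT true  || echo 'true succeeded, so NOT itself reports failure'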
00:13:37.295 request: 00:13:37.295 { 00:13:37.295 "name": "nvme0", 00:13:37.295 "trtype": "tcp", 00:13:37.295 "traddr": "10.0.0.2", 00:13:37.295 "adrfam": "ipv4", 00:13:37.295 "trsvcid": "4420", 00:13:37.295 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:37.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c", 00:13:37.295 "prchk_reftag": false, 00:13:37.295 "prchk_guard": false, 00:13:37.295 "hdgst": false, 00:13:37.295 "ddgst": false, 00:13:37.295 "dhchap_key": "key0", 00:13:37.295 "dhchap_ctrlr_key": "key1", 00:13:37.295 "method": "bdev_nvme_attach_controller", 00:13:37.295 "req_id": 1 00:13:37.295 } 00:13:37.295 Got JSON-RPC error response 00:13:37.295 response: 00:13:37.295 { 00:13:37.295 "code": -5, 00:13:37.295 "message": "Input/output error" 00:13:37.295 } 00:13:37.295 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:37.295 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:37.295 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:37.295 12:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:37.295 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:37.295 12:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:37.553 00:13:37.553 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:37.553 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:37.553 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.813 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.813 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.813 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69411 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69411 ']' 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69411 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69411 00:13:38.119 killing process with pid 69411 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:38.119 12:38:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69411' 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69411 00:13:38.119 12:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69411 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.710 rmmod nvme_tcp 00:13:38.710 rmmod nvme_fabrics 00:13:38.710 rmmod nvme_keyring 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72402 ']' 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72402 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72402 ']' 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72402 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72402 00:13:38.710 killing process with pid 72402 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72402' 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72402 00:13:38.710 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72402 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.M7F /tmp/spdk.key-sha256.WOR /tmp/spdk.key-sha384.Gjz /tmp/spdk.key-sha512.aXi /tmp/spdk.key-sha512.3U3 /tmp/spdk.key-sha384.25V /tmp/spdk.key-sha256.Fhu '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:39.279 00:13:39.279 real 2m48.446s 00:13:39.279 user 6m41.984s 00:13:39.279 sys 0m28.167s 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.279 12:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.279 ************************************ 00:13:39.279 END TEST nvmf_auth_target 00:13:39.279 ************************************ 00:13:39.279 12:38:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:39.279 12:38:11 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:39.279 12:38:11 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:39.279 12:38:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:39.279 12:38:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.279 12:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.279 ************************************ 00:13:39.279 START TEST nvmf_bdevio_no_huge 00:13:39.279 ************************************ 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:39.279 * Looking for test storage... 00:13:39.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.279 
12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:39.279 12:38:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:39.279 Cannot find device "nvmf_tgt_br" 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:39.279 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.279 Cannot find device "nvmf_tgt_br2" 00:13:39.280 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:39.280 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:39.280 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:39.280 Cannot find device "nvmf_tgt_br" 00:13:39.280 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:39.280 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:39.280 Cannot find device "nvmf_tgt_br2" 00:13:39.280 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:39.280 12:38:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:39.538 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:39.539 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:39.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:13:39.798 00:13:39.798 --- 10.0.0.2 ping statistics --- 00:13:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.798 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:39.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:39.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:13:39.798 00:13:39.798 --- 10.0.0.3 ping statistics --- 00:13:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.798 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:39.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:13:39.798 00:13:39.798 --- 10.0.0.1 ping statistics --- 00:13:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.798 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72720 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72720 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72720 ']' 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.798 12:38:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:39.798 [2024-07-15 12:38:12.340481] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:39.798 [2024-07-15 12:38:12.341325] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:40.058 [2024-07-15 12:38:12.494496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.058 [2024-07-15 12:38:12.637623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
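The three pings confirm the virtual topology the common.sh helpers just built for this target: 10.0.0.1 is the initiator address on the host side, 10.0.0.2 and 10.0.0.3 sit on veth peers inside the nvmf_tgt_ns_spdk namespace, and both legs hang off the nvmf_br bridge with TCP port 4420 opened in iptables. Condensed from the commands in the trace (the second target interface is omitted; this is not the full nvmf_veth_init implementation):

# Namespace plus two veth pairs: one leg for the initiator, one for the target.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator on the host side, target inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring everything up and enslave the bridge-side peers to nvmf_br.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP traffic in on 4420, allow forwarding across the bridge,
# then verify reachability the same way the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2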
00:13:40.058 [2024-07-15 12:38:12.637686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.058 [2024-07-15 12:38:12.637701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.058 [2024-07-15 12:38:12.637713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.058 [2024-07-15 12:38:12.637722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.058 [2024-07-15 12:38:12.637875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.058 [2024-07-15 12:38:12.638080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:40.058 [2024-07-15 12:38:12.638648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:40.058 [2024-07-15 12:38:12.638670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.058 [2024-07-15 12:38:12.643989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.991 [2024-07-15 12:38:13.407117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.991 Malloc0 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.991 [2024-07-15 12:38:13.447758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:40.991 { 00:13:40.991 "params": { 00:13:40.991 "name": "Nvme$subsystem", 00:13:40.991 "trtype": "$TEST_TRANSPORT", 00:13:40.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.991 "adrfam": "ipv4", 00:13:40.991 "trsvcid": "$NVMF_PORT", 00:13:40.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.991 "hdgst": ${hdgst:-false}, 00:13:40.991 "ddgst": ${ddgst:-false} 00:13:40.991 }, 00:13:40.991 "method": "bdev_nvme_attach_controller" 00:13:40.991 } 00:13:40.991 EOF 00:13:40.991 )") 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:40.991 12:38:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:40.991 "params": { 00:13:40.991 "name": "Nvme1", 00:13:40.991 "trtype": "tcp", 00:13:40.991 "traddr": "10.0.0.2", 00:13:40.991 "adrfam": "ipv4", 00:13:40.991 "trsvcid": "4420", 00:13:40.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:40.991 "hdgst": false, 00:13:40.991 "ddgst": false 00:13:40.991 }, 00:13:40.991 "method": "bdev_nvme_attach_controller" 00:13:40.991 }' 00:13:40.991 [2024-07-15 12:38:13.505047] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
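The bdevio run above gets its bdev configuration from gen_nvmf_target_json, handed over as --json /dev/fd/62, which is simply the descriptor a <(...) process substitution presents. A sketch of that pattern, with the attach_controller entry taken from the config printed in the trace; the outer subsystems wrapper is assumed to be SPDK's standard JSON config layout and does not appear verbatim in this excerpt:

gen_target_json() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# bdevio never sees a temp file; the JSON arrives over an anonymous fd.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_target_json) --no-huge -s 1024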
00:13:40.991 [2024-07-15 12:38:13.505161] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72756 ] 00:13:40.991 [2024-07-15 12:38:13.648322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.249 [2024-07-15 12:38:13.781249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.249 [2024-07-15 12:38:13.781441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.249 [2024-07-15 12:38:13.781443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.249 [2024-07-15 12:38:13.794325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:41.507 I/O targets: 00:13:41.507 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:41.507 00:13:41.507 00:13:41.507 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.507 http://cunit.sourceforge.net/ 00:13:41.507 00:13:41.507 00:13:41.507 Suite: bdevio tests on: Nvme1n1 00:13:41.507 Test: blockdev write read block ...passed 00:13:41.507 Test: blockdev write zeroes read block ...passed 00:13:41.507 Test: blockdev write zeroes read no split ...passed 00:13:41.507 Test: blockdev write zeroes read split ...passed 00:13:41.507 Test: blockdev write zeroes read split partial ...passed 00:13:41.507 Test: blockdev reset ...[2024-07-15 12:38:13.996515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:41.507 [2024-07-15 12:38:13.996677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2072870 (9): Bad file descriptor 00:13:41.507 [2024-07-15 12:38:14.010136] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:41.507 passed 00:13:41.507 Test: blockdev write read 8 blocks ...passed 00:13:41.507 Test: blockdev write read size > 128k ...passed 00:13:41.507 Test: blockdev write read invalid size ...passed 00:13:41.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:41.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:41.507 Test: blockdev write read max offset ...passed 00:13:41.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:41.507 Test: blockdev writev readv 8 blocks ...passed 00:13:41.507 Test: blockdev writev readv 30 x 1block ...passed 00:13:41.507 Test: blockdev writev readv block ...passed 00:13:41.507 Test: blockdev writev readv size > 128k ...passed 00:13:41.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:41.507 Test: blockdev comparev and writev ...[2024-07-15 12:38:14.021038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.021102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.021124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.021135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.021610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.021638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.021657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.021668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.022125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.022157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.022176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.022187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.022564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.022594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.022612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.507 [2024-07-15 12:38:14.022624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:41.507 passed 00:13:41.507 Test: blockdev nvme passthru rw ...passed 00:13:41.507 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:38:14.023915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.507 [2024-07-15 12:38:14.023947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.024309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.507 [2024-07-15 12:38:14.024340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.024569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.507 [2024-07-15 12:38:14.024658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:41.507 [2024-07-15 12:38:14.024837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.507 [2024-07-15 12:38:14.024867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:41.507 passed 00:13:41.507 Test: blockdev nvme admin passthru ...passed 00:13:41.507 Test: blockdev copy ...passed 00:13:41.507 00:13:41.507 Run Summary: Type Total Ran Passed Failed Inactive 00:13:41.507 suites 1 1 n/a 0 0 00:13:41.507 tests 23 23 23 0 0 00:13:41.507 asserts 152 152 152 0 n/a 00:13:41.507 00:13:41.507 Elapsed time = 0.166 seconds 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.073 rmmod nvme_tcp 00:13:42.073 rmmod nvme_fabrics 00:13:42.073 rmmod nvme_keyring 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72720 ']' 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72720 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72720 ']' 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72720 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72720 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72720' 00:13:42.073 killing process with pid 72720 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72720 00:13:42.073 12:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72720 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:42.641 00:13:42.641 real 0m3.327s 00:13:42.641 user 0m10.843s 00:13:42.641 sys 0m1.330s 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:42.641 12:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.641 ************************************ 00:13:42.641 END TEST nvmf_bdevio_no_huge 00:13:42.641 ************************************ 00:13:42.641 12:38:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:42.641 12:38:15 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:42.641 12:38:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:42.641 12:38:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.641 12:38:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.641 ************************************ 00:13:42.641 START TEST nvmf_tls 00:13:42.641 ************************************ 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:42.641 * Looking for test storage... 
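The cleanup above follows the killprocess pattern used for every nvmfpid in these suites: check the pid is still alive, look up its process name for the log message, kill it, then wait so the script observes the exit status. A hedged sketch of a teardown in that shape (the real autotest_common.sh helper also performs the uname and sudo checks visible in the trace), assuming the pid belongs to a child of the calling script:

killprocess() {
    local pid=$1 name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0           # already gone
    name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" || true                              # reap the child; a TERM status is expected
}

killprocess "$nvmfpid"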
00:13:42.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:42.641 Cannot find device "nvmf_tgt_br" 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.641 Cannot find device "nvmf_tgt_br2" 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:42.641 Cannot find device "nvmf_tgt_br" 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:42.641 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:42.900 Cannot find device "nvmf_tgt_br2" 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:42.900 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.158 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.158 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.158 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:43.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:13:43.158 00:13:43.158 --- 10.0.0.2 ping statistics --- 00:13:43.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.158 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:43.158 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:43.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:13:43.159 00:13:43.159 --- 10.0.0.3 ping statistics --- 00:13:43.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.159 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:43.159 00:13:43.159 --- 10.0.0.1 ping statistics --- 00:13:43.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.159 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72951 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72951 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72951 ']' 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.159 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.159 [2024-07-15 12:38:15.694422] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:43.159 [2024-07-15 12:38:15.694525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.159 [2024-07-15 12:38:15.830031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.417 [2024-07-15 12:38:15.998519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.417 [2024-07-15 12:38:15.998597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
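For readers reconstructing the nvmf_veth_init steps traced above: the test network is built entirely from veth pairs, one bridge, and a network namespace, and the three pings only verify that topology. The block below is a hand-condensed sketch of that same setup using the interface, namespace, and address names from the log; it summarizes the traced commands rather than copying nvmf/common.sh, so treat it as an illustration (run as root).

# 10.0.0.1   initiator side, nvmf_init_if in the default namespace
# 10.0.0.2/3 target side, nvmf_tgt_if / nvmf_tgt_if2 inside nvmf_tgt_ns_spdk
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing, then bring everything up (including lo inside the namespace).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge joins the host-side ends of all three pairs.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP on port 4420 and let the bridge forward between its own ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same sanity checks as in the log.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Everything NVMe/TCP in the rest of this run targets 10.0.0.2:4420 across this bridge, with the SPDK target itself started inside nvmf_tgt_ns_spdk a few lines further down.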
00:13:43.417 [2024-07-15 12:38:15.998611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.417 [2024-07-15 12:38:15.998620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.417 [2024-07-15 12:38:15.998628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.417 [2024-07-15 12:38:15.998659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:44.352 true 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:44.352 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:44.611 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:44.611 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:44.611 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:44.869 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:44.869 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:45.127 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:45.127 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:45.127 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:45.385 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:45.385 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:45.643 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:45.643 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:45.643 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:45.643 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:45.901 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:45.901 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:45.901 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:46.160 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:46.160 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
00:13:46.418 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:46.418 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:46.418 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:46.676 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.676 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:46.934 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ajMHV1bz32 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.9IkyMq36t5 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ajMHV1bz32 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9IkyMq36t5 00:13:47.192 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:47.450 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:47.709 [2024-07-15 12:38:20.227625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:47.709 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ajMHV1bz32 00:13:47.709 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ajMHV1bz32 00:13:47.709 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:47.968 [2024-07-15 12:38:20.542015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.968 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:48.226 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:48.484 [2024-07-15 12:38:21.022090] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:48.484 [2024-07-15 12:38:21.022465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.484 12:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:48.743 malloc0 00:13:48.743 12:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.001 12:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ajMHV1bz32 00:13:49.259 [2024-07-15 12:38:21.869058] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:49.259 12:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ajMHV1bz32 00:14:01.460 Initializing NVMe Controllers 00:14:01.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.460 Initialization complete. Launching workers. 00:14:01.460 ======================================================== 00:14:01.460 Latency(us) 00:14:01.460 Device Information : IOPS MiB/s Average min max 00:14:01.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9347.46 36.51 6848.59 1630.31 10995.22 00:14:01.460 ======================================================== 00:14:01.460 Total : 9347.46 36.51 6848.59 1630.31 10995.22 00:14:01.460 00:14:01.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
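The two NVMeTLSkey-1:01:...: strings produced by format_interchange_psk above are in the NVMe/TCP PSK interchange format: a fixed prefix, a two-digit field matching the digest argument of 1 (commonly documented as the SHA-256 variant), and a base64 blob of the configured key with a 4-byte CRC-32 appended. The sketch below shows one plausible way to derive such a key plus the target-side RPC sequence that the trace actually issued; the inline Python and the least-significant-byte-first CRC ordering are assumptions here, while every rpc.py call is copied from the log.

# Derive an interchange-format PSK like the ones above (assumption: zlib CRC-32
# over the key bytes, appended least-significant byte first, then base64).
key_path=$(mktemp)
python3 - <<'EOF' > "$key_path"
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)
print("NVMeTLSkey-1:01:{}:".format(base64.b64encode(key + crc).decode()), end="")
EOF
chmod 0600 "$key_path"

# Target-side bring-up as traced above: TLS 1.3 on the ssl socket implementation,
# a TCP listener created with -k, and the PSK bound to one host NQN.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

Because the listener is created with -k and the PSK is attached per host NQN via nvmf_subsystem_add_host, only host1 presenting exactly this key can complete the TLS handshake against cnode1, which is what the positive and negative bdevperf cases that follow exercise.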
00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajMHV1bz32 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ajMHV1bz32' 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73191 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73191 /var/tmp/bdevperf.sock 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73191 ']' 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.460 12:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.460 [2024-07-15 12:38:32.156591] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:01.460 [2024-07-15 12:38:32.156699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73191 ] 00:14:01.460 [2024-07-15 12:38:32.307882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.460 [2024-07-15 12:38:32.438720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.460 [2024-07-15 12:38:32.496356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.460 12:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.460 12:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:01.460 12:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ajMHV1bz32 00:14:01.460 [2024-07-15 12:38:33.398755] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:01.460 [2024-07-15 12:38:33.398904] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:01.460 TLSTESTn1 00:14:01.460 12:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:01.460 Running I/O for 10 seconds... 00:14:11.445 00:14:11.445 Latency(us) 00:14:11.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.445 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:11.445 Verification LBA range: start 0x0 length 0x2000 00:14:11.445 TLSTESTn1 : 10.02 3724.76 14.55 0.00 0.00 34301.04 6076.97 35985.22 00:14:11.445 =================================================================================================================== 00:14:11.445 Total : 3724.76 14.55 0.00 0.00 34301.04 6076.97 35985.22 00:14:11.445 0 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73191 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73191 ']' 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73191 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73191 00:14:11.445 killing process with pid 73191 00:14:11.445 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.445 00:14:11.445 Latency(us) 00:14:11.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.445 =================================================================================================================== 00:14:11.445 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73191' 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73191 00:14:11.445 [2024-07-15 12:38:43.680996] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73191 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9IkyMq36t5 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9IkyMq36t5 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9IkyMq36t5 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9IkyMq36t5' 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73319 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73319 /var/tmp/bdevperf.sock 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73319 ']' 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.445 12:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.445 [2024-07-15 12:38:43.986072] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:11.445 [2024-07-15 12:38:43.986491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73319 ] 00:14:11.702 [2024-07-15 12:38:44.129623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.702 [2024-07-15 12:38:44.252418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.702 [2024-07-15 12:38:44.309001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.268 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.268 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:12.268 12:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9IkyMq36t5 00:14:12.527 [2024-07-15 12:38:45.154840] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:12.527 [2024-07-15 12:38:45.154990] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:12.527 [2024-07-15 12:38:45.160657] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:12.527 [2024-07-15 12:38:45.160813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d11f0 (107): Transport endpoint is not connected 00:14:12.527 [2024-07-15 12:38:45.161800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d11f0 (9): Bad file descriptor 00:14:12.527 [2024-07-15 12:38:45.162794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:12.527 [2024-07-15 12:38:45.162827] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:12.527 [2024-07-15 12:38:45.162846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:12.527 request: 00:14:12.527 { 00:14:12.527 "name": "TLSTEST", 00:14:12.527 "trtype": "tcp", 00:14:12.527 "traddr": "10.0.0.2", 00:14:12.527 "adrfam": "ipv4", 00:14:12.527 "trsvcid": "4420", 00:14:12.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.527 "prchk_reftag": false, 00:14:12.527 "prchk_guard": false, 00:14:12.527 "hdgst": false, 00:14:12.527 "ddgst": false, 00:14:12.527 "psk": "/tmp/tmp.9IkyMq36t5", 00:14:12.527 "method": "bdev_nvme_attach_controller", 00:14:12.527 "req_id": 1 00:14:12.527 } 00:14:12.527 Got JSON-RPC error response 00:14:12.527 response: 00:14:12.527 { 00:14:12.527 "code": -5, 00:14:12.527 "message": "Input/output error" 00:14:12.527 } 00:14:12.527 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73319 00:14:12.527 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73319 ']' 00:14:12.527 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73319 00:14:12.527 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:12.527 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.527 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73319 00:14:12.785 killing process with pid 73319 00:14:12.785 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.785 00:14:12.785 Latency(us) 00:14:12.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.785 =================================================================================================================== 00:14:12.785 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73319' 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73319 00:14:12.785 [2024-07-15 12:38:45.217040] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73319 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ajMHV1bz32 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ajMHV1bz32 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ajMHV1bz32 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ajMHV1bz32' 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73347 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73347 /var/tmp/bdevperf.sock 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73347 ']' 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.785 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.786 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:12.786 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.786 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.044 [2024-07-15 12:38:45.486170] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:13.044 [2024-07-15 12:38:45.486683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73347 ] 00:14:13.044 [2024-07-15 12:38:45.622465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.301 [2024-07-15 12:38:45.749008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.301 [2024-07-15 12:38:45.809460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.867 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.867 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:13.867 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ajMHV1bz32 00:14:14.126 [2024-07-15 12:38:46.638463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.126 [2024-07-15 12:38:46.638599] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:14.126 [2024-07-15 12:38:46.650685] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:14.126 [2024-07-15 12:38:46.650776] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:14.126 [2024-07-15 12:38:46.650889] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:14.126 [2024-07-15 12:38:46.651492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196a1f0 (107): Transport endpoint is not connected 00:14:14.126 [2024-07-15 12:38:46.652474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196a1f0 (9): Bad file descriptor 00:14:14.126 [2024-07-15 12:38:46.653469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:14.126 [2024-07-15 12:38:46.653518] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:14.126 [2024-07-15 12:38:46.653553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:14.126 request: 00:14:14.126 { 00:14:14.126 "name": "TLSTEST", 00:14:14.126 "trtype": "tcp", 00:14:14.126 "traddr": "10.0.0.2", 00:14:14.126 "adrfam": "ipv4", 00:14:14.126 "trsvcid": "4420", 00:14:14.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.126 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:14.126 "prchk_reftag": false, 00:14:14.126 "prchk_guard": false, 00:14:14.126 "hdgst": false, 00:14:14.126 "ddgst": false, 00:14:14.126 "psk": "/tmp/tmp.ajMHV1bz32", 00:14:14.126 "method": "bdev_nvme_attach_controller", 00:14:14.126 "req_id": 1 00:14:14.126 } 00:14:14.126 Got JSON-RPC error response 00:14:14.126 response: 00:14:14.126 { 00:14:14.126 "code": -5, 00:14:14.126 "message": "Input/output error" 00:14:14.126 } 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73347 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73347 ']' 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73347 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73347 00:14:14.126 killing process with pid 73347 00:14:14.126 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.126 00:14:14.126 Latency(us) 00:14:14.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.126 =================================================================================================================== 00:14:14.126 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73347' 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73347 00:14:14.126 [2024-07-15 12:38:46.700205] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:14.126 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73347 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajMHV1bz32 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajMHV1bz32 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajMHV1bz32 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ajMHV1bz32' 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73374 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.384 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73374 /var/tmp/bdevperf.sock 00:14:14.385 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73374 ']' 00:14:14.385 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.385 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.385 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.385 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.385 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.385 [2024-07-15 12:38:46.981835] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:14.385 [2024-07-15 12:38:46.981927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73374 ] 00:14:14.643 [2024-07-15 12:38:47.116894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.643 [2024-07-15 12:38:47.260162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.643 [2024-07-15 12:38:47.318773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:15.579 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.579 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:15.579 12:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ajMHV1bz32 00:14:15.579 [2024-07-15 12:38:48.210862] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.579 [2024-07-15 12:38:48.211620] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:15.579 [2024-07-15 12:38:48.220896] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:15.579 [2024-07-15 12:38:48.221625] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:15.579 [2024-07-15 12:38:48.222194] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:15.579 [2024-07-15 12:38:48.222641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fe1f0 (107): Transport endpoint is not connected 00:14:15.579 [2024-07-15 12:38:48.223628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fe1f0 (9): Bad file descriptor 00:14:15.579 [2024-07-15 12:38:48.224624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:15.579 request: 00:14:15.579 { 00:14:15.579 "name": "TLSTEST", 00:14:15.579 "trtype": "tcp", 00:14:15.579 "traddr": "10.0.0.2", 00:14:15.579 "adrfam": "ipv4", 00:14:15.579 "trsvcid": "4420", 00:14:15.579 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:15.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:15.579 "prchk_reftag": false, 00:14:15.579 "prchk_guard": false, 00:14:15.579 "hdgst": false, 00:14:15.579 "ddgst": false, 00:14:15.579 "psk": "/tmp/tmp.ajMHV1bz32", 00:14:15.579 "method": "bdev_nvme_attach_controller", 00:14:15.579 "req_id": 1 00:14:15.579 } 00:14:15.579 Got JSON-RPC error response 00:14:15.579 response: 00:14:15.579 { 00:14:15.579 "code": -5, 00:14:15.579 "message": "Input/output error" 00:14:15.579 } 00:14:15.579 [2024-07-15 12:38:48.225040] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:15.579 [2024-07-15 12:38:48.225067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
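The "Could not find PSK for identity" errors in these two mismatch cases make the failure mode concrete: the target resolves the PSK by an identity string built from both NQNs, so a host NQN or subsystem NQN that was never paired via nvmf_subsystem_add_host has no key to find. Below is a tiny illustration of the identity string visible in the error above; the precise meaning of the leading NVMe0R01 token (it appears to encode the PSK/hash variant) is an assumption here.

# Identity logged for the wrong-subsystem attempt above:
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
# Only the host1 + cnode1 pairing was registered with a PSK, so this lookup fails
# and the attach surfaces as JSON-RPC code -5 (Input/output error).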
00:14:15.579 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73374 00:14:15.579 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73374 ']' 00:14:15.579 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73374 00:14:15.579 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.579 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.579 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73374 00:14:15.837 killing process with pid 73374 00:14:15.838 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.838 00:14:15.838 Latency(us) 00:14:15.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.838 =================================================================================================================== 00:14:15.838 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73374' 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73374 00:14:15.838 [2024-07-15 12:38:48.274844] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73374 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:15.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
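Each of these NOT run_bdevperf cases, and the successful TLSTESTn1 run earlier, ultimately issues the same initiator-side RPC against bdevperf's own RPC socket; only one parameter changes per case. A condensed view, with the happy-path form copied from the trace and the failing variations noted in comments:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Matching subsystem NQN, host NQN and PSK file: the attach succeeds and I/O runs.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.ajMHV1bz32

# The negative cases each change exactly one thing:
#   --psk /tmp/tmp.9IkyMq36t5      a key the target never registered
#   -q nqn.2016-06.io.spdk:host2   a host NQN with no PSK bound to it
#   -n nqn.2016-06.io.spdk:cnode2  a subsystem NQN with no PSK registered for this host
#   no --psk at all                a plain-TCP attach against the TLS-enabled listener
# Every one of them comes back as the JSON-RPC error {"code": -5, "message": "Input/output error"},
# which is what the NOT wrapper expects before the test returns 1.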
00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73402 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73402 /var/tmp/bdevperf.sock 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73402 ']' 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.838 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.097 [2024-07-15 12:38:48.541725] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:16.097 [2024-07-15 12:38:48.542216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73402 ] 00:14:16.097 [2024-07-15 12:38:48.678189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.355 [2024-07-15 12:38:48.803276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.355 [2024-07-15 12:38:48.860042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.921 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.921 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:16.921 12:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:17.179 [2024-07-15 12:38:49.801269] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:17.179 [2024-07-15 12:38:49.804896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110ec00 (9): Bad file descriptor 00:14:17.179 [2024-07-15 12:38:49.805908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:17.179 [2024-07-15 12:38:49.805941] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:17.179 [2024-07-15 12:38:49.805959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:17.179 request: 00:14:17.179 { 00:14:17.179 "name": "TLSTEST", 00:14:17.179 "trtype": "tcp", 00:14:17.179 "traddr": "10.0.0.2", 00:14:17.179 "adrfam": "ipv4", 00:14:17.179 "trsvcid": "4420", 00:14:17.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.179 "prchk_reftag": false, 00:14:17.179 "prchk_guard": false, 00:14:17.179 "hdgst": false, 00:14:17.179 "ddgst": false, 00:14:17.179 "method": "bdev_nvme_attach_controller", 00:14:17.179 "req_id": 1 00:14:17.179 } 00:14:17.179 Got JSON-RPC error response 00:14:17.179 response: 00:14:17.179 { 00:14:17.179 "code": -5, 00:14:17.179 "message": "Input/output error" 00:14:17.179 } 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73402 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73402 ']' 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73402 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73402 00:14:17.179 killing process with pid 73402 00:14:17.179 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.179 00:14:17.179 Latency(us) 00:14:17.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.179 =================================================================================================================== 00:14:17.179 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73402' 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73402 00:14:17.179 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73402 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72951 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72951 ']' 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72951 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72951 00:14:17.437 killing process with pid 72951 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72951' 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72951 00:14:17.437 [2024-07-15 12:38:50.106713] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:17.437 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72951 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.G8MA7X6qfn 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.G8MA7X6qfn 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73445 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73445 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73445 ']' 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.003 12:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.003 [2024-07-15 12:38:50.576107] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:18.003 [2024-07-15 12:38:50.576215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.262 [2024-07-15 12:38:50.712030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.262 [2024-07-15 12:38:50.867107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.262 [2024-07-15 12:38:50.867218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.262 [2024-07-15 12:38:50.867232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.262 [2024-07-15 12:38:50.867242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.262 [2024-07-15 12:38:50.867250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.262 [2024-07-15 12:38:50.867286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.598 [2024-07-15 12:38:50.946404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.856 12:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.856 12:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:18.856 12:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.856 12:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:18.856 12:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.115 12:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.115 12:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.G8MA7X6qfn 00:14:19.115 12:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G8MA7X6qfn 00:14:19.115 12:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:19.373 [2024-07-15 12:38:51.819494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.373 12:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:19.631 12:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:19.631 [2024-07-15 12:38:52.295538] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.631 [2024-07-15 12:38:52.295956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.888 12:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:19.888 malloc0 00:14:19.888 12:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:20.146 12:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn 00:14:20.405 
[2024-07-15 12:38:53.007522] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8MA7X6qfn 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G8MA7X6qfn' 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73494 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73494 /var/tmp/bdevperf.sock 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73494 ']' 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.405 12:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.405 [2024-07-15 12:38:53.070320] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:20.405 [2024-07-15 12:38:53.070621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73494 ] 00:14:20.664 [2024-07-15 12:38:53.205180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.664 [2024-07-15 12:38:53.319449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.922 [2024-07-15 12:38:53.373849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.487 12:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.487 12:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:21.487 12:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn 00:14:21.746 [2024-07-15 12:38:54.289797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.746 [2024-07-15 12:38:54.289941] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:21.746 TLSTESTn1 00:14:21.746 12:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:22.003 Running I/O for 10 seconds... 00:14:31.979 00:14:31.979 Latency(us) 00:14:31.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.979 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:31.979 Verification LBA range: start 0x0 length 0x2000 00:14:31.979 TLSTESTn1 : 10.03 2939.99 11.48 0.00 0.00 43426.55 7626.01 26452.71 00:14:31.980 =================================================================================================================== 00:14:31.980 Total : 2939.99 11.48 0.00 0.00 43426.55 7626.01 26452.71 00:14:31.980 0 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73494 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73494 ']' 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73494 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73494 00:14:31.980 killing process with pid 73494 00:14:31.980 Received shutdown signal, test time was about 10.000000 seconds 00:14:31.980 00:14:31.980 Latency(us) 00:14:31.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.980 =================================================================================================================== 00:14:31.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73494' 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73494 00:14:31.980 [2024-07-15 12:39:04.562535] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:31.980 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73494 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.G8MA7X6qfn 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8MA7X6qfn 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8MA7X6qfn 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:32.238 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8MA7X6qfn 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G8MA7X6qfn' 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73629 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73629 /var/tmp/bdevperf.sock 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73629 ']' 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.239 12:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.239 [2024-07-15 12:39:04.871770] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:32.239 [2024-07-15 12:39:04.872189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73629 ] 00:14:32.497 [2024-07-15 12:39:05.018796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.497 [2024-07-15 12:39:05.167626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.755 [2024-07-15 12:39:05.245893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:33.322 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.322 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:33.322 12:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn 00:14:33.581 [2024-07-15 12:39:06.132023] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.581 [2024-07-15 12:39:06.132141] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:33.581 [2024-07-15 12:39:06.132154] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.G8MA7X6qfn 00:14:33.581 request: 00:14:33.581 { 00:14:33.581 "name": "TLSTEST", 00:14:33.581 "trtype": "tcp", 00:14:33.581 "traddr": "10.0.0.2", 00:14:33.581 "adrfam": "ipv4", 00:14:33.581 "trsvcid": "4420", 00:14:33.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.581 "prchk_reftag": false, 00:14:33.581 "prchk_guard": false, 00:14:33.581 "hdgst": false, 00:14:33.581 "ddgst": false, 00:14:33.581 "psk": "/tmp/tmp.G8MA7X6qfn", 00:14:33.581 "method": "bdev_nvme_attach_controller", 00:14:33.581 "req_id": 1 00:14:33.581 } 00:14:33.581 Got JSON-RPC error response 00:14:33.581 response: 00:14:33.581 { 00:14:33.581 "code": -1, 00:14:33.581 "message": "Operation not permitted" 00:14:33.581 } 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73629 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73629 ']' 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73629 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73629 00:14:33.581 killing process with pid 73629 00:14:33.581 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.581 00:14:33.581 Latency(us) 00:14:33.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.581 =================================================================================================================== 00:14:33.581 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73629' 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73629 00:14:33.581 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73629 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73445 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73445 ']' 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73445 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.840 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73445 00:14:34.099 killing process with pid 73445 00:14:34.099 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:34.099 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:34.099 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73445' 00:14:34.099 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73445 00:14:34.099 [2024-07-15 12:39:06.539552] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:34.099 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73445 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73667 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73667 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73667 ']' 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.358 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.358 [2024-07-15 12:39:06.961997] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
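The chmod 0666 / chmod 0600 pairs in this part of the run are what the negative tests hinge on: both the host side (bdev_nvme_load_psk above) and the target side (tcp_load_psk below) refuse a PSK file whose permissions are too open. The exact mode bits SPDK rejects are not spelled out in this log, so the "no group/other access" rule below is an assumption; a rough user-space equivalent of the check:

import os
import stat

def load_psk(path):
    # Reject keys that group/other can read or write; the run above shows that
    # 0600 is accepted while 0666 produces "Incorrect permissions for PSK file".
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"{path}: mode {oct(mode)} is too permissive, expected something like 0600")
    with open(path, "r") as f:
        return f.read().strip()

# Example: load_psk("/tmp/tmp.G8MA7X6qfn") succeeds after 'chmod 0600' and raises after 'chmod 0666'.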
00:14:34.358 [2024-07-15 12:39:06.962539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.616 [2024-07-15 12:39:07.098575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.617 [2024-07-15 12:39:07.259638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.617 [2024-07-15 12:39:07.260122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.617 [2024-07-15 12:39:07.260270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.617 [2024-07-15 12:39:07.260294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.617 [2024-07-15 12:39:07.260302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.617 [2024-07-15 12:39:07.260338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.875 [2024-07-15 12:39:07.341960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:35.442 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.442 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:35.442 12:39:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.442 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.442 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.G8MA7X6qfn 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.G8MA7X6qfn 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.G8MA7X6qfn 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G8MA7X6qfn 00:14:35.442 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:35.701 [2024-07-15 12:39:08.354257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.701 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:36.268 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:36.268 [2024-07-15 12:39:08.938362] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:36.268 [2024-07-15 12:39:08.938697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.527 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:36.785 malloc0 00:14:36.785 12:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:37.043 12:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn 00:14:37.302 [2024-07-15 12:39:09.762493] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:37.302 [2024-07-15 12:39:09.762589] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:37.302 [2024-07-15 12:39:09.762641] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:37.302 request: 00:14:37.302 { 00:14:37.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.302 "host": "nqn.2016-06.io.spdk:host1", 00:14:37.302 "psk": "/tmp/tmp.G8MA7X6qfn", 00:14:37.302 "method": "nvmf_subsystem_add_host", 00:14:37.302 "req_id": 1 00:14:37.302 } 00:14:37.302 Got JSON-RPC error response 00:14:37.302 response: 00:14:37.302 { 00:14:37.302 "code": -32603, 00:14:37.302 "message": "Internal error" 00:14:37.302 } 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73667 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73667 ']' 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73667 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73667 00:14:37.302 killing process with pid 73667 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73667' 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73667 00:14:37.302 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73667 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.G8MA7X6qfn 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
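The key being protected here, generated earlier by format_interchange_psk and stored in /tmp/tmp.G8MA7X6qfn, is in the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a hash indicator ("02", matching the digest argument 2), and a base64 payload. Judging by the inline python in nvmf/common.sh and by the key_long value printed above (the payload starts with the base64 of the ASCII key characters), the payload appears to be the configured key bytes followed by their CRC-32; treat the little-endian CRC placement below as an assumption rather than a spec quote:

import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Assumed layout: base64(key bytes || CRC-32 of the key bytes, little-endian),
    # wrapped as NVMeTLSkey-1:<digest>:<payload>: as seen in this log.
    data = key.encode()
    payload = base64.b64encode(data + struct.pack("<I", zlib.crc32(data))).decode()
    return f"NVMeTLSkey-1:{digest:02d}:{payload}:"

# Should reproduce the key_long printed above (NVMeTLSkey-1:02:MDAx...==:) if the assumption holds.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))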
00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73735 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73735 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73735 ']' 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.560 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.560 [2024-07-15 12:39:10.228412] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:37.560 [2024-07-15 12:39:10.228505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.818 [2024-07-15 12:39:10.368299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.077 [2024-07-15 12:39:10.523579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.077 [2024-07-15 12:39:10.524048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.077 [2024-07-15 12:39:10.524217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.077 [2024-07-15 12:39:10.524503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.077 [2024-07-15 12:39:10.524547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.077 [2024-07-15 12:39:10.524613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.077 [2024-07-15 12:39:10.603468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.G8MA7X6qfn 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G8MA7X6qfn 00:14:38.643 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:38.900 [2024-07-15 12:39:11.497325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.900 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:39.158 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:39.445 [2024-07-15 12:39:11.965339] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:39.445 [2024-07-15 12:39:11.965670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.445 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:39.711 malloc0 00:14:39.711 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:39.970 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn 00:14:40.229 [2024-07-15 12:39:12.677460] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:40.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
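After the bdevperf attach with --psk below succeeds and TLSTESTn1 finishes, the run snapshots both applications with rpc.py save_config; the two JSON blobs that follow are the target's configuration (nvmf_subsystem_add_host carrying the PSK path) and bdevperf's (bdev_nvme_attach_controller carrying the same path). A small sketch for pulling those entries back out of such a dump, assuming it has been saved to a file named tgt_config.json (that filename is used here only for illustration):

import json

def find_psk_entries(path):
    # Walk the save_config layout shown below: a top-level "subsystems" list,
    # each subsystem holding a "config" list of {"method": ..., "params": ...} entries.
    with open(path) as f:
        cfg = json.load(f)
    hits = []
    for subsystem in cfg.get("subsystems", []):
        for entry in subsystem.get("config", []):
            params = entry.get("params", {})
            if "psk" in params:
                hits.append((entry["method"], params["psk"]))
    return hits

# e.g. [('nvmf_subsystem_add_host', '/tmp/tmp.G8MA7X6qfn')] for the target dump,
#      [('bdev_nvme_attach_controller', '/tmp/tmp.G8MA7X6qfn')] for the bdevperf dump.
print(find_psk_entries("tgt_config.json"))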
00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73784 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73784 /var/tmp/bdevperf.sock 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73784 ']' 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.229 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.229 [2024-07-15 12:39:12.743361] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:40.229 [2024-07-15 12:39:12.743669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73784 ] 00:14:40.229 [2024-07-15 12:39:12.880289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.487 [2024-07-15 12:39:13.048411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.487 [2024-07-15 12:39:13.125250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.053 12:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.053 12:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:41.053 12:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn 00:14:41.311 [2024-07-15 12:39:13.923248] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:41.311 [2024-07-15 12:39:13.923403] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:41.569 TLSTESTn1 00:14:41.569 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:41.828 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:41.828 "subsystems": [ 00:14:41.828 { 00:14:41.828 "subsystem": "keyring", 00:14:41.828 "config": [] 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "subsystem": "iobuf", 00:14:41.828 "config": [ 00:14:41.828 { 00:14:41.828 "method": "iobuf_set_options", 00:14:41.828 "params": { 00:14:41.828 "small_pool_count": 8192, 00:14:41.828 "large_pool_count": 1024, 00:14:41.828 "small_bufsize": 8192, 00:14:41.828 "large_bufsize": 135168 00:14:41.828 } 00:14:41.828 } 00:14:41.828 ] 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "subsystem": "sock", 00:14:41.828 "config": [ 00:14:41.828 { 00:14:41.828 
"method": "sock_set_default_impl", 00:14:41.828 "params": { 00:14:41.828 "impl_name": "uring" 00:14:41.828 } 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "method": "sock_impl_set_options", 00:14:41.828 "params": { 00:14:41.828 "impl_name": "ssl", 00:14:41.828 "recv_buf_size": 4096, 00:14:41.828 "send_buf_size": 4096, 00:14:41.828 "enable_recv_pipe": true, 00:14:41.828 "enable_quickack": false, 00:14:41.828 "enable_placement_id": 0, 00:14:41.828 "enable_zerocopy_send_server": true, 00:14:41.828 "enable_zerocopy_send_client": false, 00:14:41.828 "zerocopy_threshold": 0, 00:14:41.828 "tls_version": 0, 00:14:41.828 "enable_ktls": false 00:14:41.828 } 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "method": "sock_impl_set_options", 00:14:41.828 "params": { 00:14:41.828 "impl_name": "posix", 00:14:41.828 "recv_buf_size": 2097152, 00:14:41.828 "send_buf_size": 2097152, 00:14:41.828 "enable_recv_pipe": true, 00:14:41.828 "enable_quickack": false, 00:14:41.828 "enable_placement_id": 0, 00:14:41.828 "enable_zerocopy_send_server": true, 00:14:41.828 "enable_zerocopy_send_client": false, 00:14:41.828 "zerocopy_threshold": 0, 00:14:41.828 "tls_version": 0, 00:14:41.828 "enable_ktls": false 00:14:41.828 } 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "method": "sock_impl_set_options", 00:14:41.828 "params": { 00:14:41.828 "impl_name": "uring", 00:14:41.828 "recv_buf_size": 2097152, 00:14:41.828 "send_buf_size": 2097152, 00:14:41.828 "enable_recv_pipe": true, 00:14:41.828 "enable_quickack": false, 00:14:41.828 "enable_placement_id": 0, 00:14:41.828 "enable_zerocopy_send_server": false, 00:14:41.828 "enable_zerocopy_send_client": false, 00:14:41.828 "zerocopy_threshold": 0, 00:14:41.828 "tls_version": 0, 00:14:41.829 "enable_ktls": false 00:14:41.829 } 00:14:41.829 } 00:14:41.829 ] 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "subsystem": "vmd", 00:14:41.829 "config": [] 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "subsystem": "accel", 00:14:41.829 "config": [ 00:14:41.829 { 00:14:41.829 "method": "accel_set_options", 00:14:41.829 "params": { 00:14:41.829 "small_cache_size": 128, 00:14:41.829 "large_cache_size": 16, 00:14:41.829 "task_count": 2048, 00:14:41.829 "sequence_count": 2048, 00:14:41.829 "buf_count": 2048 00:14:41.829 } 00:14:41.829 } 00:14:41.829 ] 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "subsystem": "bdev", 00:14:41.829 "config": [ 00:14:41.829 { 00:14:41.829 "method": "bdev_set_options", 00:14:41.829 "params": { 00:14:41.829 "bdev_io_pool_size": 65535, 00:14:41.829 "bdev_io_cache_size": 256, 00:14:41.829 "bdev_auto_examine": true, 00:14:41.829 "iobuf_small_cache_size": 128, 00:14:41.829 "iobuf_large_cache_size": 16 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "bdev_raid_set_options", 00:14:41.829 "params": { 00:14:41.829 "process_window_size_kb": 1024 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "bdev_iscsi_set_options", 00:14:41.829 "params": { 00:14:41.829 "timeout_sec": 30 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "bdev_nvme_set_options", 00:14:41.829 "params": { 00:14:41.829 "action_on_timeout": "none", 00:14:41.829 "timeout_us": 0, 00:14:41.829 "timeout_admin_us": 0, 00:14:41.829 "keep_alive_timeout_ms": 10000, 00:14:41.829 "arbitration_burst": 0, 00:14:41.829 "low_priority_weight": 0, 00:14:41.829 "medium_priority_weight": 0, 00:14:41.829 "high_priority_weight": 0, 00:14:41.829 "nvme_adminq_poll_period_us": 10000, 00:14:41.829 "nvme_ioq_poll_period_us": 0, 00:14:41.829 "io_queue_requests": 0, 00:14:41.829 
"delay_cmd_submit": true, 00:14:41.829 "transport_retry_count": 4, 00:14:41.829 "bdev_retry_count": 3, 00:14:41.829 "transport_ack_timeout": 0, 00:14:41.829 "ctrlr_loss_timeout_sec": 0, 00:14:41.829 "reconnect_delay_sec": 0, 00:14:41.829 "fast_io_fail_timeout_sec": 0, 00:14:41.829 "disable_auto_failback": false, 00:14:41.829 "generate_uuids": false, 00:14:41.829 "transport_tos": 0, 00:14:41.829 "nvme_error_stat": false, 00:14:41.829 "rdma_srq_size": 0, 00:14:41.829 "io_path_stat": false, 00:14:41.829 "allow_accel_sequence": false, 00:14:41.829 "rdma_max_cq_size": 0, 00:14:41.829 "rdma_cm_event_timeout_ms": 0, 00:14:41.829 "dhchap_digests": [ 00:14:41.829 "sha256", 00:14:41.829 "sha384", 00:14:41.829 "sha512" 00:14:41.829 ], 00:14:41.829 "dhchap_dhgroups": [ 00:14:41.829 "null", 00:14:41.829 "ffdhe2048", 00:14:41.829 "ffdhe3072", 00:14:41.829 "ffdhe4096", 00:14:41.829 "ffdhe6144", 00:14:41.829 "ffdhe8192" 00:14:41.829 ] 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "bdev_nvme_set_hotplug", 00:14:41.829 "params": { 00:14:41.829 "period_us": 100000, 00:14:41.829 "enable": false 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "bdev_malloc_create", 00:14:41.829 "params": { 00:14:41.829 "name": "malloc0", 00:14:41.829 "num_blocks": 8192, 00:14:41.829 "block_size": 4096, 00:14:41.829 "physical_block_size": 4096, 00:14:41.829 "uuid": "d0791052-a63a-4bea-b447-eb611b3bbd00", 00:14:41.829 "optimal_io_boundary": 0 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "bdev_wait_for_examine" 00:14:41.829 } 00:14:41.829 ] 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "subsystem": "nbd", 00:14:41.829 "config": [] 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "subsystem": "scheduler", 00:14:41.829 "config": [ 00:14:41.829 { 00:14:41.829 "method": "framework_set_scheduler", 00:14:41.829 "params": { 00:14:41.829 "name": "static" 00:14:41.829 } 00:14:41.829 } 00:14:41.829 ] 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "subsystem": "nvmf", 00:14:41.829 "config": [ 00:14:41.829 { 00:14:41.829 "method": "nvmf_set_config", 00:14:41.829 "params": { 00:14:41.829 "discovery_filter": "match_any", 00:14:41.829 "admin_cmd_passthru": { 00:14:41.829 "identify_ctrlr": false 00:14:41.829 } 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "nvmf_set_max_subsystems", 00:14:41.829 "params": { 00:14:41.829 "max_subsystems": 1024 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "nvmf_set_crdt", 00:14:41.829 "params": { 00:14:41.829 "crdt1": 0, 00:14:41.829 "crdt2": 0, 00:14:41.829 "crdt3": 0 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "nvmf_create_transport", 00:14:41.829 "params": { 00:14:41.829 "trtype": "TCP", 00:14:41.829 "max_queue_depth": 128, 00:14:41.829 "max_io_qpairs_per_ctrlr": 127, 00:14:41.829 "in_capsule_data_size": 4096, 00:14:41.829 "max_io_size": 131072, 00:14:41.829 "io_unit_size": 131072, 00:14:41.829 "max_aq_depth": 128, 00:14:41.829 "num_shared_buffers": 511, 00:14:41.829 "buf_cache_size": 4294967295, 00:14:41.829 "dif_insert_or_strip": false, 00:14:41.829 "zcopy": false, 00:14:41.829 "c2h_success": false, 00:14:41.829 "sock_priority": 0, 00:14:41.829 "abort_timeout_sec": 1, 00:14:41.829 "ack_timeout": 0, 00:14:41.829 "data_wr_pool_size": 0 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "nvmf_create_subsystem", 00:14:41.829 "params": { 00:14:41.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.829 "allow_any_host": false, 00:14:41.829 "serial_number": 
"SPDK00000000000001", 00:14:41.829 "model_number": "SPDK bdev Controller", 00:14:41.829 "max_namespaces": 10, 00:14:41.829 "min_cntlid": 1, 00:14:41.829 "max_cntlid": 65519, 00:14:41.829 "ana_reporting": false 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "nvmf_subsystem_add_host", 00:14:41.829 "params": { 00:14:41.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.829 "host": "nqn.2016-06.io.spdk:host1", 00:14:41.829 "psk": "/tmp/tmp.G8MA7X6qfn" 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "nvmf_subsystem_add_ns", 00:14:41.829 "params": { 00:14:41.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.829 "namespace": { 00:14:41.829 "nsid": 1, 00:14:41.829 "bdev_name": "malloc0", 00:14:41.829 "nguid": "D0791052A63A4BEAB447EB611B3BBD00", 00:14:41.829 "uuid": "d0791052-a63a-4bea-b447-eb611b3bbd00", 00:14:41.829 "no_auto_visible": false 00:14:41.829 } 00:14:41.829 } 00:14:41.829 }, 00:14:41.829 { 00:14:41.829 "method": "nvmf_subsystem_add_listener", 00:14:41.829 "params": { 00:14:41.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.829 "listen_address": { 00:14:41.829 "trtype": "TCP", 00:14:41.829 "adrfam": "IPv4", 00:14:41.829 "traddr": "10.0.0.2", 00:14:41.829 "trsvcid": "4420" 00:14:41.829 }, 00:14:41.829 "secure_channel": true 00:14:41.829 } 00:14:41.829 } 00:14:41.829 ] 00:14:41.829 } 00:14:41.829 ] 00:14:41.830 }' 00:14:41.830 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:42.105 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:42.105 "subsystems": [ 00:14:42.105 { 00:14:42.105 "subsystem": "keyring", 00:14:42.105 "config": [] 00:14:42.105 }, 00:14:42.105 { 00:14:42.105 "subsystem": "iobuf", 00:14:42.105 "config": [ 00:14:42.105 { 00:14:42.105 "method": "iobuf_set_options", 00:14:42.105 "params": { 00:14:42.105 "small_pool_count": 8192, 00:14:42.105 "large_pool_count": 1024, 00:14:42.105 "small_bufsize": 8192, 00:14:42.105 "large_bufsize": 135168 00:14:42.105 } 00:14:42.105 } 00:14:42.106 ] 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "subsystem": "sock", 00:14:42.106 "config": [ 00:14:42.106 { 00:14:42.106 "method": "sock_set_default_impl", 00:14:42.106 "params": { 00:14:42.106 "impl_name": "uring" 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "sock_impl_set_options", 00:14:42.106 "params": { 00:14:42.106 "impl_name": "ssl", 00:14:42.106 "recv_buf_size": 4096, 00:14:42.106 "send_buf_size": 4096, 00:14:42.106 "enable_recv_pipe": true, 00:14:42.106 "enable_quickack": false, 00:14:42.106 "enable_placement_id": 0, 00:14:42.106 "enable_zerocopy_send_server": true, 00:14:42.106 "enable_zerocopy_send_client": false, 00:14:42.106 "zerocopy_threshold": 0, 00:14:42.106 "tls_version": 0, 00:14:42.106 "enable_ktls": false 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "sock_impl_set_options", 00:14:42.106 "params": { 00:14:42.106 "impl_name": "posix", 00:14:42.106 "recv_buf_size": 2097152, 00:14:42.106 "send_buf_size": 2097152, 00:14:42.106 "enable_recv_pipe": true, 00:14:42.106 "enable_quickack": false, 00:14:42.106 "enable_placement_id": 0, 00:14:42.106 "enable_zerocopy_send_server": true, 00:14:42.106 "enable_zerocopy_send_client": false, 00:14:42.106 "zerocopy_threshold": 0, 00:14:42.106 "tls_version": 0, 00:14:42.106 "enable_ktls": false 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "sock_impl_set_options", 00:14:42.106 "params": { 00:14:42.106 "impl_name": "uring", 
00:14:42.106 "recv_buf_size": 2097152, 00:14:42.106 "send_buf_size": 2097152, 00:14:42.106 "enable_recv_pipe": true, 00:14:42.106 "enable_quickack": false, 00:14:42.106 "enable_placement_id": 0, 00:14:42.106 "enable_zerocopy_send_server": false, 00:14:42.106 "enable_zerocopy_send_client": false, 00:14:42.106 "zerocopy_threshold": 0, 00:14:42.106 "tls_version": 0, 00:14:42.106 "enable_ktls": false 00:14:42.106 } 00:14:42.106 } 00:14:42.106 ] 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "subsystem": "vmd", 00:14:42.106 "config": [] 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "subsystem": "accel", 00:14:42.106 "config": [ 00:14:42.106 { 00:14:42.106 "method": "accel_set_options", 00:14:42.106 "params": { 00:14:42.106 "small_cache_size": 128, 00:14:42.106 "large_cache_size": 16, 00:14:42.106 "task_count": 2048, 00:14:42.106 "sequence_count": 2048, 00:14:42.106 "buf_count": 2048 00:14:42.106 } 00:14:42.106 } 00:14:42.106 ] 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "subsystem": "bdev", 00:14:42.106 "config": [ 00:14:42.106 { 00:14:42.106 "method": "bdev_set_options", 00:14:42.106 "params": { 00:14:42.106 "bdev_io_pool_size": 65535, 00:14:42.106 "bdev_io_cache_size": 256, 00:14:42.106 "bdev_auto_examine": true, 00:14:42.106 "iobuf_small_cache_size": 128, 00:14:42.106 "iobuf_large_cache_size": 16 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "bdev_raid_set_options", 00:14:42.106 "params": { 00:14:42.106 "process_window_size_kb": 1024 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "bdev_iscsi_set_options", 00:14:42.106 "params": { 00:14:42.106 "timeout_sec": 30 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "bdev_nvme_set_options", 00:14:42.106 "params": { 00:14:42.106 "action_on_timeout": "none", 00:14:42.106 "timeout_us": 0, 00:14:42.106 "timeout_admin_us": 0, 00:14:42.106 "keep_alive_timeout_ms": 10000, 00:14:42.106 "arbitration_burst": 0, 00:14:42.106 "low_priority_weight": 0, 00:14:42.106 "medium_priority_weight": 0, 00:14:42.106 "high_priority_weight": 0, 00:14:42.106 "nvme_adminq_poll_period_us": 10000, 00:14:42.106 "nvme_ioq_poll_period_us": 0, 00:14:42.106 "io_queue_requests": 512, 00:14:42.106 "delay_cmd_submit": true, 00:14:42.106 "transport_retry_count": 4, 00:14:42.106 "bdev_retry_count": 3, 00:14:42.106 "transport_ack_timeout": 0, 00:14:42.106 "ctrlr_loss_timeout_sec": 0, 00:14:42.106 "reconnect_delay_sec": 0, 00:14:42.106 "fast_io_fail_timeout_sec": 0, 00:14:42.106 "disable_auto_failback": false, 00:14:42.106 "generate_uuids": false, 00:14:42.106 "transport_tos": 0, 00:14:42.106 "nvme_error_stat": false, 00:14:42.106 "rdma_srq_size": 0, 00:14:42.106 "io_path_stat": false, 00:14:42.106 "allow_accel_sequence": false, 00:14:42.106 "rdma_max_cq_size": 0, 00:14:42.106 "rdma_cm_event_timeout_ms": 0, 00:14:42.106 "dhchap_digests": [ 00:14:42.106 "sha256", 00:14:42.106 "sha384", 00:14:42.106 "sha512" 00:14:42.106 ], 00:14:42.106 "dhchap_dhgroups": [ 00:14:42.106 "null", 00:14:42.106 "ffdhe2048", 00:14:42.106 "ffdhe3072", 00:14:42.106 "ffdhe4096", 00:14:42.106 "ffdhe6144", 00:14:42.106 "ffdhe8192" 00:14:42.106 ] 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "bdev_nvme_attach_controller", 00:14:42.106 "params": { 00:14:42.106 "name": "TLSTEST", 00:14:42.106 "trtype": "TCP", 00:14:42.106 "adrfam": "IPv4", 00:14:42.106 "traddr": "10.0.0.2", 00:14:42.106 "trsvcid": "4420", 00:14:42.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.106 "prchk_reftag": false, 00:14:42.106 "prchk_guard": false, 00:14:42.106 
"ctrlr_loss_timeout_sec": 0, 00:14:42.106 "reconnect_delay_sec": 0, 00:14:42.106 "fast_io_fail_timeout_sec": 0, 00:14:42.106 "psk": "/tmp/tmp.G8MA7X6qfn", 00:14:42.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.106 "hdgst": false, 00:14:42.106 "ddgst": false 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "bdev_nvme_set_hotplug", 00:14:42.106 "params": { 00:14:42.106 "period_us": 100000, 00:14:42.106 "enable": false 00:14:42.106 } 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "method": "bdev_wait_for_examine" 00:14:42.106 } 00:14:42.106 ] 00:14:42.106 }, 00:14:42.106 { 00:14:42.106 "subsystem": "nbd", 00:14:42.106 "config": [] 00:14:42.106 } 00:14:42.106 ] 00:14:42.106 }' 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73784 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73784 ']' 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73784 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73784 00:14:42.106 killing process with pid 73784 00:14:42.106 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.106 00:14:42.106 Latency(us) 00:14:42.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.106 =================================================================================================================== 00:14:42.106 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73784' 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73784 00:14:42.106 [2024-07-15 12:39:14.709003] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:42.106 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73784 00:14:42.364 12:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73735 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73735 ']' 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73735 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73735 00:14:42.365 killing process with pid 73735 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73735' 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73735 00:14:42.365 [2024-07-15 12:39:15.036621] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:14:42.365 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73735 00:14:42.932 12:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:42.932 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.932 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.932 12:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:42.932 "subsystems": [ 00:14:42.932 { 00:14:42.932 "subsystem": "keyring", 00:14:42.932 "config": [] 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "subsystem": "iobuf", 00:14:42.932 "config": [ 00:14:42.932 { 00:14:42.932 "method": "iobuf_set_options", 00:14:42.932 "params": { 00:14:42.932 "small_pool_count": 8192, 00:14:42.932 "large_pool_count": 1024, 00:14:42.932 "small_bufsize": 8192, 00:14:42.932 "large_bufsize": 135168 00:14:42.932 } 00:14:42.932 } 00:14:42.932 ] 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "subsystem": "sock", 00:14:42.932 "config": [ 00:14:42.932 { 00:14:42.932 "method": "sock_set_default_impl", 00:14:42.932 "params": { 00:14:42.932 "impl_name": "uring" 00:14:42.932 } 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "method": "sock_impl_set_options", 00:14:42.932 "params": { 00:14:42.932 "impl_name": "ssl", 00:14:42.932 "recv_buf_size": 4096, 00:14:42.932 "send_buf_size": 4096, 00:14:42.932 "enable_recv_pipe": true, 00:14:42.932 "enable_quickack": false, 00:14:42.932 "enable_placement_id": 0, 00:14:42.932 "enable_zerocopy_send_server": true, 00:14:42.932 "enable_zerocopy_send_client": false, 00:14:42.932 "zerocopy_threshold": 0, 00:14:42.932 "tls_version": 0, 00:14:42.932 "enable_ktls": false 00:14:42.932 } 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "method": "sock_impl_set_options", 00:14:42.932 "params": { 00:14:42.932 "impl_name": "posix", 00:14:42.932 "recv_buf_size": 2097152, 00:14:42.932 "send_buf_size": 2097152, 00:14:42.932 "enable_recv_pipe": true, 00:14:42.932 "enable_quickack": false, 00:14:42.932 "enable_placement_id": 0, 00:14:42.932 "enable_zerocopy_send_server": true, 00:14:42.932 "enable_zerocopy_send_client": false, 00:14:42.932 "zerocopy_threshold": 0, 00:14:42.932 "tls_version": 0, 00:14:42.932 "enable_ktls": false 00:14:42.932 } 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "method": "sock_impl_set_options", 00:14:42.932 "params": { 00:14:42.932 "impl_name": "uring", 00:14:42.932 "recv_buf_size": 2097152, 00:14:42.932 "send_buf_size": 2097152, 00:14:42.932 "enable_recv_pipe": true, 00:14:42.932 "enable_quickack": false, 00:14:42.932 "enable_placement_id": 0, 00:14:42.932 "enable_zerocopy_send_server": false, 00:14:42.932 "enable_zerocopy_send_client": false, 00:14:42.932 "zerocopy_threshold": 0, 00:14:42.932 "tls_version": 0, 00:14:42.932 "enable_ktls": false 00:14:42.932 } 00:14:42.932 } 00:14:42.932 ] 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "subsystem": "vmd", 00:14:42.932 "config": [] 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "subsystem": "accel", 00:14:42.932 "config": [ 00:14:42.932 { 00:14:42.932 "method": "accel_set_options", 00:14:42.932 "params": { 00:14:42.932 "small_cache_size": 128, 00:14:42.932 "large_cache_size": 16, 00:14:42.932 "task_count": 2048, 00:14:42.932 "sequence_count": 2048, 00:14:42.932 "buf_count": 2048 00:14:42.932 } 00:14:42.932 } 00:14:42.932 ] 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "subsystem": "bdev", 00:14:42.932 "config": [ 00:14:42.932 { 00:14:42.932 "method": "bdev_set_options", 00:14:42.932 "params": { 00:14:42.932 
"bdev_io_pool_size": 65535, 00:14:42.932 "bdev_io_cache_size": 256, 00:14:42.932 "bdev_auto_examine": true, 00:14:42.932 "iobuf_small_cache_size": 128, 00:14:42.932 "iobuf_large_cache_size": 16 00:14:42.932 } 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "method": "bdev_raid_set_options", 00:14:42.932 "params": { 00:14:42.932 "process_window_size_kb": 1024 00:14:42.932 } 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "method": "bdev_iscsi_set_options", 00:14:42.932 "params": { 00:14:42.932 "timeout_sec": 30 00:14:42.932 } 00:14:42.932 }, 00:14:42.932 { 00:14:42.932 "method": "bdev_nvme_set_options", 00:14:42.932 "params": { 00:14:42.932 "action_on_timeout": "none", 00:14:42.932 "timeout_us": 0, 00:14:42.932 "timeout_admin_us": 0, 00:14:42.933 "keep_alive_timeout_ms": 10000, 00:14:42.933 "arbitration_burst": 0, 00:14:42.933 "low_priority_weight": 0, 00:14:42.933 "medium_priority_weight": 0, 00:14:42.933 "high_priority_weight": 0, 00:14:42.933 "nvme_adminq_poll_period_us": 10000, 00:14:42.933 "nvme_ioq_poll_period_us": 0, 00:14:42.933 "io_queue_requests": 0, 00:14:42.933 "delay_cmd_submit": true, 00:14:42.933 "transport_retry_count": 4, 00:14:42.933 "bdev_retry_count": 3, 00:14:42.933 "transport_ack_timeout": 0, 00:14:42.933 "ctrlr_loss_timeout_sec": 0, 00:14:42.933 "reconnect_delay_sec": 0, 00:14:42.933 "fast_io_fail_timeout_sec": 0, 00:14:42.933 "disable_auto_failback": false, 00:14:42.933 "generate_uuids": false, 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.933 "transport_tos": 0, 00:14:42.933 "nvme_error_stat": false, 00:14:42.933 "rdma_srq_size": 0, 00:14:42.933 "io_path_stat": false, 00:14:42.933 "allow_accel_sequence": false, 00:14:42.933 "rdma_max_cq_size": 0, 00:14:42.933 "rdma_cm_event_timeout_ms": 0, 00:14:42.933 "dhchap_digests": [ 00:14:42.933 "sha256", 00:14:42.933 "sha384", 00:14:42.933 "sha512" 00:14:42.933 ], 00:14:42.933 "dhchap_dhgroups": [ 00:14:42.933 "null", 00:14:42.933 "ffdhe2048", 00:14:42.933 "ffdhe3072", 00:14:42.933 "ffdhe4096", 00:14:42.933 "ffdhe6144", 00:14:42.933 "ffdhe8192" 00:14:42.933 ] 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "bdev_nvme_set_hotplug", 00:14:42.933 "params": { 00:14:42.933 "period_us": 100000, 00:14:42.933 "enable": false 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "bdev_malloc_create", 00:14:42.933 "params": { 00:14:42.933 "name": "malloc0", 00:14:42.933 "num_blocks": 8192, 00:14:42.933 "block_size": 4096, 00:14:42.933 "physical_block_size": 4096, 00:14:42.933 "uuid": "d0791052-a63a-4bea-b447-eb611b3bbd00", 00:14:42.933 "optimal_io_boundary": 0 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "bdev_wait_for_examine" 00:14:42.933 } 00:14:42.933 ] 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "subsystem": "nbd", 00:14:42.933 "config": [] 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "subsystem": "scheduler", 00:14:42.933 "config": [ 00:14:42.933 { 00:14:42.933 "method": "framework_set_scheduler", 00:14:42.933 "params": { 00:14:42.933 "name": "static" 00:14:42.933 } 00:14:42.933 } 00:14:42.933 ] 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "subsystem": "nvmf", 00:14:42.933 "config": [ 00:14:42.933 { 00:14:42.933 "method": "nvmf_set_config", 00:14:42.933 "params": { 00:14:42.933 "discovery_filter": "match_any", 00:14:42.933 "admin_cmd_passthru": { 00:14:42.933 "identify_ctrlr": false 00:14:42.933 } 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "nvmf_set_max_subsystems", 00:14:42.933 "params": { 00:14:42.933 
"max_subsystems": 1024 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "nvmf_set_crdt", 00:14:42.933 "params": { 00:14:42.933 "crdt1": 0, 00:14:42.933 "crdt2": 0, 00:14:42.933 "crdt3": 0 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "nvmf_create_transport", 00:14:42.933 "params": { 00:14:42.933 "trtype": "TCP", 00:14:42.933 "max_queue_depth": 128, 00:14:42.933 "max_io_qpairs_per_ctrlr": 127, 00:14:42.933 "in_capsule_data_size": 4096, 00:14:42.933 "max_io_size": 131072, 00:14:42.933 "io_unit_size": 131072, 00:14:42.933 "max_aq_depth": 128, 00:14:42.933 "num_shared_buffers": 511, 00:14:42.933 "buf_cache_size": 4294967295, 00:14:42.933 "dif_insert_or_strip": false, 00:14:42.933 "zcopy": false, 00:14:42.933 "c2h_success": false, 00:14:42.933 "sock_priority": 0, 00:14:42.933 "abort_timeout_sec": 1, 00:14:42.933 "ack_timeout": 0, 00:14:42.933 "data_wr_pool_size": 0 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "nvmf_create_subsystem", 00:14:42.933 "params": { 00:14:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.933 "allow_any_host": false, 00:14:42.933 "serial_number": "SPDK00000000000001", 00:14:42.933 "model_number": "SPDK bdev Controller", 00:14:42.933 "max_namespaces": 10, 00:14:42.933 "min_cntlid": 1, 00:14:42.933 "max_cntlid": 65519, 00:14:42.933 "ana_reporting": false 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "nvmf_subsystem_add_host", 00:14:42.933 "params": { 00:14:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.933 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.933 "psk": "/tmp/tmp.G8MA7X6qfn" 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "nvmf_subsystem_add_ns", 00:14:42.933 "params": { 00:14:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.933 "namespace": { 00:14:42.933 "nsid": 1, 00:14:42.933 "bdev_name": "malloc0", 00:14:42.933 "nguid": "D0791052A63A4BEAB447EB611B3BBD00", 00:14:42.933 "uuid": "d0791052-a63a-4bea-b447-eb611b3bbd00", 00:14:42.933 "no_auto_visible": false 00:14:42.933 } 00:14:42.933 } 00:14:42.933 }, 00:14:42.933 { 00:14:42.933 "method": "nvmf_subsystem_add_listener", 00:14:42.933 "params": { 00:14:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.933 "listen_address": { 00:14:42.933 "trtype": "TCP", 00:14:42.933 "adrfam": "IPv4", 00:14:42.933 "traddr": "10.0.0.2", 00:14:42.933 "trsvcid": "4420" 00:14:42.933 }, 00:14:42.933 "secure_channel": true 00:14:42.933 } 00:14:42.933 } 00:14:42.933 ] 00:14:42.933 } 00:14:42.933 ] 00:14:42.933 }' 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73833 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73833 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73833 ']' 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.933 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.933 [2024-07-15 12:39:15.403327] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:42.933 [2024-07-15 12:39:15.403407] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.933 [2024-07-15 12:39:15.537897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.192 [2024-07-15 12:39:15.649239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.192 [2024-07-15 12:39:15.649310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.192 [2024-07-15 12:39:15.649323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.192 [2024-07-15 12:39:15.649331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.192 [2024-07-15 12:39:15.649339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.192 [2024-07-15 12:39:15.649437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.192 [2024-07-15 12:39:15.839779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:43.451 [2024-07-15 12:39:15.925662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.451 [2024-07-15 12:39:15.941601] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:43.451 [2024-07-15 12:39:15.957595] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:43.451 [2024-07-15 12:39:15.957855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.709 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.709 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:43.709 12:39:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.709 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.709 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73865 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73865 /var/tmp/bdevperf.sock 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73865 ']' 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
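Note: the block that follows is the mirror-image configuration for the initiator side; target/tls.sh@204 echoes it into /dev/fd/63 for bdevperf, and its TLS-relevant entry is the bdev_nvme_attach_controller call that still carries the PSK as a file path ("psk": "/tmp/tmp.G8MA7X6qfn"), the form the later deprecation warnings refer to. Once bdevperf is listening on its RPC socket, the test drives the 10-second verify run with bdevperf.py. In command form, both lines copied from the trace that follows:

    # Launch bdevperf with the echoed JSON config on fd 63 (tls.sh@204) ...
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c /dev/fd/63
    # ... then kick off the workload over that RPC socket (tls.sh@211).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests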
00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:43.969 12:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:43.969 "subsystems": [ 00:14:43.969 { 00:14:43.969 "subsystem": "keyring", 00:14:43.969 "config": [] 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "subsystem": "iobuf", 00:14:43.969 "config": [ 00:14:43.969 { 00:14:43.969 "method": "iobuf_set_options", 00:14:43.969 "params": { 00:14:43.969 "small_pool_count": 8192, 00:14:43.969 "large_pool_count": 1024, 00:14:43.969 "small_bufsize": 8192, 00:14:43.969 "large_bufsize": 135168 00:14:43.969 } 00:14:43.969 } 00:14:43.969 ] 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "subsystem": "sock", 00:14:43.969 "config": [ 00:14:43.969 { 00:14:43.969 "method": "sock_set_default_impl", 00:14:43.969 "params": { 00:14:43.969 "impl_name": "uring" 00:14:43.969 } 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "method": "sock_impl_set_options", 00:14:43.969 "params": { 00:14:43.969 "impl_name": "ssl", 00:14:43.969 "recv_buf_size": 4096, 00:14:43.969 "send_buf_size": 4096, 00:14:43.969 "enable_recv_pipe": true, 00:14:43.969 "enable_quickack": false, 00:14:43.969 "enable_placement_id": 0, 00:14:43.969 "enable_zerocopy_send_server": true, 00:14:43.969 "enable_zerocopy_send_client": false, 00:14:43.969 "zerocopy_threshold": 0, 00:14:43.969 "tls_version": 0, 00:14:43.969 "enable_ktls": false 00:14:43.969 } 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "method": "sock_impl_set_options", 00:14:43.969 "params": { 00:14:43.969 "impl_name": "posix", 00:14:43.969 "recv_buf_size": 2097152, 00:14:43.969 "send_buf_size": 2097152, 00:14:43.969 "enable_recv_pipe": true, 00:14:43.969 "enable_quickack": false, 00:14:43.969 "enable_placement_id": 0, 00:14:43.969 "enable_zerocopy_send_server": true, 00:14:43.969 "enable_zerocopy_send_client": false, 00:14:43.969 "zerocopy_threshold": 0, 00:14:43.969 "tls_version": 0, 00:14:43.969 "enable_ktls": false 00:14:43.969 } 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "method": "sock_impl_set_options", 00:14:43.969 "params": { 00:14:43.969 "impl_name": "uring", 00:14:43.969 "recv_buf_size": 2097152, 00:14:43.969 "send_buf_size": 2097152, 00:14:43.969 "enable_recv_pipe": true, 00:14:43.969 "enable_quickack": false, 00:14:43.969 "enable_placement_id": 0, 00:14:43.969 "enable_zerocopy_send_server": false, 00:14:43.969 "enable_zerocopy_send_client": false, 00:14:43.969 "zerocopy_threshold": 0, 00:14:43.969 "tls_version": 0, 00:14:43.969 "enable_ktls": false 00:14:43.969 } 00:14:43.969 } 00:14:43.969 ] 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "subsystem": "vmd", 00:14:43.969 "config": [] 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "subsystem": "accel", 00:14:43.969 "config": [ 00:14:43.969 { 00:14:43.969 "method": "accel_set_options", 00:14:43.969 "params": { 00:14:43.969 "small_cache_size": 128, 00:14:43.969 "large_cache_size": 16, 00:14:43.969 "task_count": 2048, 00:14:43.969 "sequence_count": 2048, 00:14:43.969 "buf_count": 2048 00:14:43.969 } 00:14:43.969 } 00:14:43.969 ] 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "subsystem": "bdev", 00:14:43.969 "config": [ 00:14:43.969 { 00:14:43.969 "method": "bdev_set_options", 00:14:43.969 "params": { 00:14:43.969 "bdev_io_pool_size": 65535, 00:14:43.969 
"bdev_io_cache_size": 256, 00:14:43.969 "bdev_auto_examine": true, 00:14:43.969 "iobuf_small_cache_size": 128, 00:14:43.969 "iobuf_large_cache_size": 16 00:14:43.969 } 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "method": "bdev_raid_set_options", 00:14:43.969 "params": { 00:14:43.969 "process_window_size_kb": 1024 00:14:43.969 } 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "method": "bdev_iscsi_set_options", 00:14:43.969 "params": { 00:14:43.969 "timeout_sec": 30 00:14:43.969 } 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "method": "bdev_nvme_set_options", 00:14:43.969 "params": { 00:14:43.969 "action_on_timeout": "none", 00:14:43.969 "timeout_us": 0, 00:14:43.969 "timeout_admin_us": 0, 00:14:43.969 "keep_alive_timeout_ms": 10000, 00:14:43.969 "arbitration_burst": 0, 00:14:43.969 "low_priority_weight": 0, 00:14:43.969 "medium_priority_weight": 0, 00:14:43.969 "high_priority_weight": 0, 00:14:43.969 "nvme_adminq_poll_period_us": 10000, 00:14:43.969 "nvme_ioq_poll_period_us": 0, 00:14:43.969 "io_queue_requests": 512, 00:14:43.969 "delay_cmd_submit": true, 00:14:43.969 "transport_retry_count": 4, 00:14:43.969 "bdev_retry_count": 3, 00:14:43.969 "transport_ack_timeout": 0, 00:14:43.969 "ctrlr_loss_timeout_sec": 0, 00:14:43.969 "reconnect_delay_sec": 0, 00:14:43.969 "fast_io_fail_timeout_sec": 0, 00:14:43.969 "disable_auto_failback": false, 00:14:43.969 "generate_uuids": false, 00:14:43.969 "transport_tos": 0, 00:14:43.969 "nvme_error_stat": false, 00:14:43.969 "rdma_srq_size": 0, 00:14:43.969 "io_path_stat": false, 00:14:43.969 "allow_accel_sequence": false, 00:14:43.969 "rdma_max_cq_size": 0, 00:14:43.969 "rdma_cm_event_timeout_ms": 0, 00:14:43.969 "dhchap_digests": [ 00:14:43.969 "sha256", 00:14:43.969 "sha384", 00:14:43.969 "sha512" 00:14:43.969 ], 00:14:43.969 "dhchap_dhgroups": [ 00:14:43.969 "null", 00:14:43.969 "ffdhe2048", 00:14:43.969 "ffdhe3072", 00:14:43.969 "ffdhe4096", 00:14:43.969 "ffdhe6144", 00:14:43.969 "ffdhe8192" 00:14:43.969 ] 00:14:43.969 } 00:14:43.969 }, 00:14:43.969 { 00:14:43.969 "method": "bdev_nvme_attach_controller", 00:14:43.969 "params": { 00:14:43.969 "name": "TLSTEST", 00:14:43.969 "trtype": "TCP", 00:14:43.969 "adrfam": "IPv4", 00:14:43.969 "traddr": "10.0.0.2", 00:14:43.969 "trsvcid": "4420", 00:14:43.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.969 "prchk_reftag": false, 00:14:43.969 "prchk_guard": false, 00:14:43.969 "ctrlr_loss_timeout_sec": 0, 00:14:43.969 "reconnect_delay_sec": 0, 00:14:43.970 "fast_io_fail_timeout_sec": 0, 00:14:43.970 "psk": "/tmp/tmp.G8MA7X6qfn", 00:14:43.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.970 "hdgst": false, 00:14:43.970 "ddgst": false 00:14:43.970 } 00:14:43.970 }, 00:14:43.970 { 00:14:43.970 "method": "bdev_nvme_set_hotplug", 00:14:43.970 "params": { 00:14:43.970 "period_us": 100000, 00:14:43.970 "enable": false 00:14:43.970 } 00:14:43.970 }, 00:14:43.970 { 00:14:43.970 "method": "bdev_wait_for_examine" 00:14:43.970 } 00:14:43.970 ] 00:14:43.970 }, 00:14:43.970 { 00:14:43.970 "subsystem": "nbd", 00:14:43.970 "config": [] 00:14:43.970 } 00:14:43.970 ] 00:14:43.970 }' 00:14:43.970 [2024-07-15 12:39:16.452068] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:43.970 [2024-07-15 12:39:16.452158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73865 ] 00:14:43.970 [2024-07-15 12:39:16.587381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.229 [2024-07-15 12:39:16.708748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.229 [2024-07-15 12:39:16.867500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:44.489 [2024-07-15 12:39:16.920161] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.489 [2024-07-15 12:39:16.921090] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:45.056 12:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.056 12:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:45.056 12:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:45.056 Running I/O for 10 seconds... 00:14:55.051 00:14:55.051 Latency(us) 00:14:55.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.051 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:55.051 Verification LBA range: start 0x0 length 0x2000 00:14:55.051 TLSTESTn1 : 10.03 3379.63 13.20 0.00 0.00 37785.46 8936.73 38130.04 00:14:55.051 =================================================================================================================== 00:14:55.051 Total : 3379.63 13.20 0.00 0.00 37785.46 8936.73 38130.04 00:14:55.051 0 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73865 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73865 ']' 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73865 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73865 00:14:55.051 killing process with pid 73865 00:14:55.051 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.051 00:14:55.051 Latency(us) 00:14:55.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.051 =================================================================================================================== 00:14:55.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73865' 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73865 00:14:55.051 [2024-07-15 12:39:27.674929] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:55.051 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73865 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73833 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73833 ']' 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73833 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73833 00:14:55.308 killing process with pid 73833 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73833' 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73833 00:14:55.308 [2024-07-15 12:39:27.929417] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:55.308 12:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73833 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74004 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74004 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74004 ']' 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.876 12:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.876 [2024-07-15 12:39:28.317561] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:55.876 [2024-07-15 12:39:28.317660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.876 [2024-07-15 12:39:28.458718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.134 [2024-07-15 12:39:28.569028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:56.134 [2024-07-15 12:39:28.569075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.134 [2024-07-15 12:39:28.569102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.134 [2024-07-15 12:39:28.569117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.134 [2024-07-15 12:39:28.569124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.134 [2024-07-15 12:39:28.569168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.134 [2024-07-15 12:39:28.623617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.G8MA7X6qfn 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G8MA7X6qfn 00:14:56.700 12:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:56.958 [2024-07-15 12:39:29.591888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.958 12:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.217 12:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:57.474 [2024-07-15 12:39:30.096045] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:57.474 [2024-07-15 12:39:30.096557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.474 12:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:57.733 malloc0 00:14:57.733 12:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:57.991 12:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn 00:14:58.249 [2024-07-15 12:39:30.812499] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74058 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
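Note: condensed, the target-side setup that setup_nvmf_tgt traced above (target/tls.sh@51-@58) is the following RPC sequence; the commands, NQNs and the temporary PSK file are copied from the trace, with the full /home/vagrant/spdk_repo/spdk prefix on rpc.py abbreviated. Passing the PSK to nvmf_subsystem_add_host as a file path is the form flagged as deprecated for v24.09, while the bdevperf side below registers the key with keyring_file_add_key and references it by the name key0.

    # Target-side recap of setup_nvmf_tgt (rpc.py path abbreviated).
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8MA7X6qfn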
00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74058 /var/tmp/bdevperf.sock 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74058 ']' 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.249 12:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.250 12:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.250 [2024-07-15 12:39:30.882520] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:58.250 [2024-07-15 12:39:30.882927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74058 ] 00:14:58.507 [2024-07-15 12:39:31.017771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.507 [2024-07-15 12:39:31.149045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.766 [2024-07-15 12:39:31.224337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:59.333 12:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.333 12:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:59.333 12:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G8MA7X6qfn 00:14:59.333 12:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:59.591 [2024-07-15 12:39:32.209913] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:59.849 nvme0n1 00:14:59.849 12:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:59.849 Running I/O for 1 seconds... 
00:15:00.783 00:15:00.783 Latency(us) 00:15:00.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.783 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:00.783 Verification LBA range: start 0x0 length 0x2000 00:15:00.783 nvme0n1 : 1.02 3404.23 13.30 0.00 0.00 37217.89 6672.76 32648.84 00:15:00.783 =================================================================================================================== 00:15:00.783 Total : 3404.23 13.30 0.00 0.00 37217.89 6672.76 32648.84 00:15:00.783 0 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74058 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74058 ']' 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74058 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74058 00:15:00.783 killing process with pid 74058 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74058' 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74058 00:15:00.783 Received shutdown signal, test time was about 1.000000 seconds 00:15:00.783 00:15:00.783 Latency(us) 00:15:00.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.783 =================================================================================================================== 00:15:00.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.783 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74058 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 74004 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74004 ']' 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74004 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74004 00:15:01.349 killing process with pid 74004 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74004' 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74004 00:15:01.349 [2024-07-15 12:39:33.765252] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:01.349 12:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74004 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74104 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74104 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74104 ']' 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.349 12:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.608 [2024-07-15 12:39:34.067307] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:01.608 [2024-07-15 12:39:34.067396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.608 [2024-07-15 12:39:34.199286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.868 [2024-07-15 12:39:34.315562] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.868 [2024-07-15 12:39:34.315626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.868 [2024-07-15 12:39:34.315653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.868 [2024-07-15 12:39:34.315662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.868 [2024-07-15 12:39:34.315669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
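Note: the app_setup_trace notices above are the standard startup hints and nothing in this run acts on them, but if a failure in this phase needed post-mortem work, the two options the application itself suggests would be (the spdk_trace invocation is quoted verbatim from the notice; the copy destination is arbitrary):

    # Capture a live snapshot of nvmf tracepoints for app instance 0.
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0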
00:15:01.868 [2024-07-15 12:39:34.315717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.868 [2024-07-15 12:39:34.373710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.438 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 [2024-07-15 12:39:35.097655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.438 malloc0 00:15:02.698 [2024-07-15 12:39:35.129999] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.698 [2024-07-15 12:39:35.130246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74136 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74136 /var/tmp/bdevperf.sock 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74136 ']' 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.698 12:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.698 [2024-07-15 12:39:35.216854] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:02.698 [2024-07-15 12:39:35.216946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74136 ] 00:15:02.698 [2024-07-15 12:39:35.357444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.957 [2024-07-15 12:39:35.500123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.957 [2024-07-15 12:39:35.579196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.525 12:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.525 12:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:03.525 12:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G8MA7X6qfn 00:15:03.800 12:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:04.079 [2024-07-15 12:39:36.715263] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.337 nvme0n1 00:15:04.337 12:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.337 Running I/O for 1 seconds... 00:15:05.274 00:15:05.274 Latency(us) 00:15:05.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.274 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:05.274 Verification LBA range: start 0x0 length 0x2000 00:15:05.274 nvme0n1 : 1.03 3550.38 13.87 0.00 0.00 35472.38 7357.91 21686.46 00:15:05.274 =================================================================================================================== 00:15:05.274 Total : 3550.38 13.87 0.00 0.00 35472.38 7357.91 21686.46 00:15:05.274 0 00:15:05.532 12:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:05.532 12:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.532 12:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.532 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.532 12:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:15:05.532 "subsystems": [ 00:15:05.532 { 00:15:05.532 "subsystem": "keyring", 00:15:05.532 "config": [ 00:15:05.532 { 00:15:05.532 "method": "keyring_file_add_key", 00:15:05.532 "params": { 00:15:05.532 "name": "key0", 00:15:05.532 "path": "/tmp/tmp.G8MA7X6qfn" 00:15:05.532 } 00:15:05.532 } 00:15:05.532 ] 00:15:05.532 }, 00:15:05.532 { 00:15:05.532 "subsystem": "iobuf", 00:15:05.532 "config": [ 00:15:05.532 { 00:15:05.532 "method": "iobuf_set_options", 00:15:05.532 "params": { 00:15:05.532 "small_pool_count": 8192, 00:15:05.532 "large_pool_count": 1024, 00:15:05.532 "small_bufsize": 8192, 00:15:05.532 "large_bufsize": 135168 00:15:05.532 } 00:15:05.532 } 00:15:05.532 ] 00:15:05.532 }, 00:15:05.532 { 00:15:05.532 "subsystem": "sock", 00:15:05.532 "config": [ 00:15:05.532 { 00:15:05.532 "method": "sock_set_default_impl", 00:15:05.532 "params": { 00:15:05.532 "impl_name": "uring" 
00:15:05.532 } 00:15:05.532 }, 00:15:05.532 { 00:15:05.532 "method": "sock_impl_set_options", 00:15:05.532 "params": { 00:15:05.532 "impl_name": "ssl", 00:15:05.532 "recv_buf_size": 4096, 00:15:05.532 "send_buf_size": 4096, 00:15:05.532 "enable_recv_pipe": true, 00:15:05.532 "enable_quickack": false, 00:15:05.532 "enable_placement_id": 0, 00:15:05.532 "enable_zerocopy_send_server": true, 00:15:05.532 "enable_zerocopy_send_client": false, 00:15:05.532 "zerocopy_threshold": 0, 00:15:05.532 "tls_version": 0, 00:15:05.532 "enable_ktls": false 00:15:05.532 } 00:15:05.532 }, 00:15:05.532 { 00:15:05.532 "method": "sock_impl_set_options", 00:15:05.532 "params": { 00:15:05.532 "impl_name": "posix", 00:15:05.532 "recv_buf_size": 2097152, 00:15:05.532 "send_buf_size": 2097152, 00:15:05.532 "enable_recv_pipe": true, 00:15:05.532 "enable_quickack": false, 00:15:05.532 "enable_placement_id": 0, 00:15:05.532 "enable_zerocopy_send_server": true, 00:15:05.532 "enable_zerocopy_send_client": false, 00:15:05.532 "zerocopy_threshold": 0, 00:15:05.532 "tls_version": 0, 00:15:05.532 "enable_ktls": false 00:15:05.532 } 00:15:05.532 }, 00:15:05.532 { 00:15:05.532 "method": "sock_impl_set_options", 00:15:05.532 "params": { 00:15:05.532 "impl_name": "uring", 00:15:05.532 "recv_buf_size": 2097152, 00:15:05.532 "send_buf_size": 2097152, 00:15:05.532 "enable_recv_pipe": true, 00:15:05.532 "enable_quickack": false, 00:15:05.532 "enable_placement_id": 0, 00:15:05.532 "enable_zerocopy_send_server": false, 00:15:05.532 "enable_zerocopy_send_client": false, 00:15:05.532 "zerocopy_threshold": 0, 00:15:05.532 "tls_version": 0, 00:15:05.532 "enable_ktls": false 00:15:05.532 } 00:15:05.532 } 00:15:05.532 ] 00:15:05.532 }, 00:15:05.532 { 00:15:05.532 "subsystem": "vmd", 00:15:05.532 "config": [] 00:15:05.532 }, 00:15:05.532 { 00:15:05.532 "subsystem": "accel", 00:15:05.532 "config": [ 00:15:05.532 { 00:15:05.532 "method": "accel_set_options", 00:15:05.532 "params": { 00:15:05.532 "small_cache_size": 128, 00:15:05.532 "large_cache_size": 16, 00:15:05.532 "task_count": 2048, 00:15:05.532 "sequence_count": 2048, 00:15:05.532 "buf_count": 2048 00:15:05.532 } 00:15:05.532 } 00:15:05.532 ] 00:15:05.532 }, 00:15:05.533 { 00:15:05.533 "subsystem": "bdev", 00:15:05.533 "config": [ 00:15:05.533 { 00:15:05.533 "method": "bdev_set_options", 00:15:05.533 "params": { 00:15:05.533 "bdev_io_pool_size": 65535, 00:15:05.533 "bdev_io_cache_size": 256, 00:15:05.533 "bdev_auto_examine": true, 00:15:05.533 "iobuf_small_cache_size": 128, 00:15:05.533 "iobuf_large_cache_size": 16 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "bdev_raid_set_options", 00:15:05.533 "params": { 00:15:05.533 "process_window_size_kb": 1024 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "bdev_iscsi_set_options", 00:15:05.533 "params": { 00:15:05.533 "timeout_sec": 30 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "bdev_nvme_set_options", 00:15:05.533 "params": { 00:15:05.533 "action_on_timeout": "none", 00:15:05.533 "timeout_us": 0, 00:15:05.533 "timeout_admin_us": 0, 00:15:05.533 "keep_alive_timeout_ms": 10000, 00:15:05.533 "arbitration_burst": 0, 00:15:05.533 "low_priority_weight": 0, 00:15:05.533 "medium_priority_weight": 0, 00:15:05.533 "high_priority_weight": 0, 00:15:05.533 "nvme_adminq_poll_period_us": 10000, 00:15:05.533 "nvme_ioq_poll_period_us": 0, 00:15:05.533 "io_queue_requests": 0, 00:15:05.533 "delay_cmd_submit": true, 00:15:05.533 "transport_retry_count": 4, 00:15:05.533 "bdev_retry_count": 3, 
00:15:05.533 "transport_ack_timeout": 0, 00:15:05.533 "ctrlr_loss_timeout_sec": 0, 00:15:05.533 "reconnect_delay_sec": 0, 00:15:05.533 "fast_io_fail_timeout_sec": 0, 00:15:05.533 "disable_auto_failback": false, 00:15:05.533 "generate_uuids": false, 00:15:05.533 "transport_tos": 0, 00:15:05.533 "nvme_error_stat": false, 00:15:05.533 "rdma_srq_size": 0, 00:15:05.533 "io_path_stat": false, 00:15:05.533 "allow_accel_sequence": false, 00:15:05.533 "rdma_max_cq_size": 0, 00:15:05.533 "rdma_cm_event_timeout_ms": 0, 00:15:05.533 "dhchap_digests": [ 00:15:05.533 "sha256", 00:15:05.533 "sha384", 00:15:05.533 "sha512" 00:15:05.533 ], 00:15:05.533 "dhchap_dhgroups": [ 00:15:05.533 "null", 00:15:05.533 "ffdhe2048", 00:15:05.533 "ffdhe3072", 00:15:05.533 "ffdhe4096", 00:15:05.533 "ffdhe6144", 00:15:05.533 "ffdhe8192" 00:15:05.533 ] 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "bdev_nvme_set_hotplug", 00:15:05.533 "params": { 00:15:05.533 "period_us": 100000, 00:15:05.533 "enable": false 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "bdev_malloc_create", 00:15:05.533 "params": { 00:15:05.533 "name": "malloc0", 00:15:05.533 "num_blocks": 8192, 00:15:05.533 "block_size": 4096, 00:15:05.533 "physical_block_size": 4096, 00:15:05.533 "uuid": "90aff3e2-a4ae-4eb0-96fc-682da5092059", 00:15:05.533 "optimal_io_boundary": 0 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "bdev_wait_for_examine" 00:15:05.533 } 00:15:05.533 ] 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "subsystem": "nbd", 00:15:05.533 "config": [] 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "subsystem": "scheduler", 00:15:05.533 "config": [ 00:15:05.533 { 00:15:05.533 "method": "framework_set_scheduler", 00:15:05.533 "params": { 00:15:05.533 "name": "static" 00:15:05.533 } 00:15:05.533 } 00:15:05.533 ] 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "subsystem": "nvmf", 00:15:05.533 "config": [ 00:15:05.533 { 00:15:05.533 "method": "nvmf_set_config", 00:15:05.533 "params": { 00:15:05.533 "discovery_filter": "match_any", 00:15:05.533 "admin_cmd_passthru": { 00:15:05.533 "identify_ctrlr": false 00:15:05.533 } 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "nvmf_set_max_subsystems", 00:15:05.533 "params": { 00:15:05.533 "max_subsystems": 1024 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "nvmf_set_crdt", 00:15:05.533 "params": { 00:15:05.533 "crdt1": 0, 00:15:05.533 "crdt2": 0, 00:15:05.533 "crdt3": 0 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "nvmf_create_transport", 00:15:05.533 "params": { 00:15:05.533 "trtype": "TCP", 00:15:05.533 "max_queue_depth": 128, 00:15:05.533 "max_io_qpairs_per_ctrlr": 127, 00:15:05.533 "in_capsule_data_size": 4096, 00:15:05.533 "max_io_size": 131072, 00:15:05.533 "io_unit_size": 131072, 00:15:05.533 "max_aq_depth": 128, 00:15:05.533 "num_shared_buffers": 511, 00:15:05.533 "buf_cache_size": 4294967295, 00:15:05.533 "dif_insert_or_strip": false, 00:15:05.533 "zcopy": false, 00:15:05.533 "c2h_success": false, 00:15:05.533 "sock_priority": 0, 00:15:05.533 "abort_timeout_sec": 1, 00:15:05.533 "ack_timeout": 0, 00:15:05.533 "data_wr_pool_size": 0 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "nvmf_create_subsystem", 00:15:05.533 "params": { 00:15:05.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.533 "allow_any_host": false, 00:15:05.533 "serial_number": "00000000000000000000", 00:15:05.533 "model_number": "SPDK bdev Controller", 00:15:05.533 "max_namespaces": 32, 
00:15:05.533 "min_cntlid": 1, 00:15:05.533 "max_cntlid": 65519, 00:15:05.533 "ana_reporting": false 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "nvmf_subsystem_add_host", 00:15:05.533 "params": { 00:15:05.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.533 "host": "nqn.2016-06.io.spdk:host1", 00:15:05.533 "psk": "key0" 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "nvmf_subsystem_add_ns", 00:15:05.533 "params": { 00:15:05.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.533 "namespace": { 00:15:05.533 "nsid": 1, 00:15:05.533 "bdev_name": "malloc0", 00:15:05.533 "nguid": "90AFF3E2A4AE4EB096FC682DA5092059", 00:15:05.533 "uuid": "90aff3e2-a4ae-4eb0-96fc-682da5092059", 00:15:05.533 "no_auto_visible": false 00:15:05.533 } 00:15:05.533 } 00:15:05.533 }, 00:15:05.533 { 00:15:05.533 "method": "nvmf_subsystem_add_listener", 00:15:05.533 "params": { 00:15:05.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.533 "listen_address": { 00:15:05.533 "trtype": "TCP", 00:15:05.533 "adrfam": "IPv4", 00:15:05.533 "traddr": "10.0.0.2", 00:15:05.533 "trsvcid": "4420" 00:15:05.533 }, 00:15:05.533 "secure_channel": true 00:15:05.533 } 00:15:05.533 } 00:15:05.533 ] 00:15:05.533 } 00:15:05.533 ] 00:15:05.533 }' 00:15:05.533 12:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:06.100 12:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:15:06.100 "subsystems": [ 00:15:06.100 { 00:15:06.100 "subsystem": "keyring", 00:15:06.100 "config": [ 00:15:06.100 { 00:15:06.100 "method": "keyring_file_add_key", 00:15:06.100 "params": { 00:15:06.100 "name": "key0", 00:15:06.100 "path": "/tmp/tmp.G8MA7X6qfn" 00:15:06.100 } 00:15:06.100 } 00:15:06.100 ] 00:15:06.100 }, 00:15:06.100 { 00:15:06.100 "subsystem": "iobuf", 00:15:06.100 "config": [ 00:15:06.100 { 00:15:06.100 "method": "iobuf_set_options", 00:15:06.100 "params": { 00:15:06.100 "small_pool_count": 8192, 00:15:06.100 "large_pool_count": 1024, 00:15:06.100 "small_bufsize": 8192, 00:15:06.100 "large_bufsize": 135168 00:15:06.100 } 00:15:06.100 } 00:15:06.100 ] 00:15:06.100 }, 00:15:06.100 { 00:15:06.100 "subsystem": "sock", 00:15:06.100 "config": [ 00:15:06.100 { 00:15:06.100 "method": "sock_set_default_impl", 00:15:06.100 "params": { 00:15:06.100 "impl_name": "uring" 00:15:06.100 } 00:15:06.100 }, 00:15:06.100 { 00:15:06.100 "method": "sock_impl_set_options", 00:15:06.100 "params": { 00:15:06.100 "impl_name": "ssl", 00:15:06.100 "recv_buf_size": 4096, 00:15:06.100 "send_buf_size": 4096, 00:15:06.100 "enable_recv_pipe": true, 00:15:06.100 "enable_quickack": false, 00:15:06.100 "enable_placement_id": 0, 00:15:06.100 "enable_zerocopy_send_server": true, 00:15:06.100 "enable_zerocopy_send_client": false, 00:15:06.100 "zerocopy_threshold": 0, 00:15:06.100 "tls_version": 0, 00:15:06.100 "enable_ktls": false 00:15:06.100 } 00:15:06.100 }, 00:15:06.100 { 00:15:06.100 "method": "sock_impl_set_options", 00:15:06.100 "params": { 00:15:06.100 "impl_name": "posix", 00:15:06.100 "recv_buf_size": 2097152, 00:15:06.100 "send_buf_size": 2097152, 00:15:06.100 "enable_recv_pipe": true, 00:15:06.100 "enable_quickack": false, 00:15:06.100 "enable_placement_id": 0, 00:15:06.100 "enable_zerocopy_send_server": true, 00:15:06.100 "enable_zerocopy_send_client": false, 00:15:06.100 "zerocopy_threshold": 0, 00:15:06.100 "tls_version": 0, 00:15:06.101 "enable_ktls": false 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": 
"sock_impl_set_options", 00:15:06.101 "params": { 00:15:06.101 "impl_name": "uring", 00:15:06.101 "recv_buf_size": 2097152, 00:15:06.101 "send_buf_size": 2097152, 00:15:06.101 "enable_recv_pipe": true, 00:15:06.101 "enable_quickack": false, 00:15:06.101 "enable_placement_id": 0, 00:15:06.101 "enable_zerocopy_send_server": false, 00:15:06.101 "enable_zerocopy_send_client": false, 00:15:06.101 "zerocopy_threshold": 0, 00:15:06.101 "tls_version": 0, 00:15:06.101 "enable_ktls": false 00:15:06.101 } 00:15:06.101 } 00:15:06.101 ] 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "subsystem": "vmd", 00:15:06.101 "config": [] 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "subsystem": "accel", 00:15:06.101 "config": [ 00:15:06.101 { 00:15:06.101 "method": "accel_set_options", 00:15:06.101 "params": { 00:15:06.101 "small_cache_size": 128, 00:15:06.101 "large_cache_size": 16, 00:15:06.101 "task_count": 2048, 00:15:06.101 "sequence_count": 2048, 00:15:06.101 "buf_count": 2048 00:15:06.101 } 00:15:06.101 } 00:15:06.101 ] 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "subsystem": "bdev", 00:15:06.101 "config": [ 00:15:06.101 { 00:15:06.101 "method": "bdev_set_options", 00:15:06.101 "params": { 00:15:06.101 "bdev_io_pool_size": 65535, 00:15:06.101 "bdev_io_cache_size": 256, 00:15:06.101 "bdev_auto_examine": true, 00:15:06.101 "iobuf_small_cache_size": 128, 00:15:06.101 "iobuf_large_cache_size": 16 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": "bdev_raid_set_options", 00:15:06.101 "params": { 00:15:06.101 "process_window_size_kb": 1024 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": "bdev_iscsi_set_options", 00:15:06.101 "params": { 00:15:06.101 "timeout_sec": 30 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": "bdev_nvme_set_options", 00:15:06.101 "params": { 00:15:06.101 "action_on_timeout": "none", 00:15:06.101 "timeout_us": 0, 00:15:06.101 "timeout_admin_us": 0, 00:15:06.101 "keep_alive_timeout_ms": 10000, 00:15:06.101 "arbitration_burst": 0, 00:15:06.101 "low_priority_weight": 0, 00:15:06.101 "medium_priority_weight": 0, 00:15:06.101 "high_priority_weight": 0, 00:15:06.101 "nvme_adminq_poll_period_us": 10000, 00:15:06.101 "nvme_ioq_poll_period_us": 0, 00:15:06.101 "io_queue_requests": 512, 00:15:06.101 "delay_cmd_submit": true, 00:15:06.101 "transport_retry_count": 4, 00:15:06.101 "bdev_retry_count": 3, 00:15:06.101 "transport_ack_timeout": 0, 00:15:06.101 "ctrlr_loss_timeout_sec": 0, 00:15:06.101 "reconnect_delay_sec": 0, 00:15:06.101 "fast_io_fail_timeout_sec": 0, 00:15:06.101 "disable_auto_failback": false, 00:15:06.101 "generate_uuids": false, 00:15:06.101 "transport_tos": 0, 00:15:06.101 "nvme_error_stat": false, 00:15:06.101 "rdma_srq_size": 0, 00:15:06.101 "io_path_stat": false, 00:15:06.101 "allow_accel_sequence": false, 00:15:06.101 "rdma_max_cq_size": 0, 00:15:06.101 "rdma_cm_event_timeout_ms": 0, 00:15:06.101 "dhchap_digests": [ 00:15:06.101 "sha256", 00:15:06.101 "sha384", 00:15:06.101 "sha512" 00:15:06.101 ], 00:15:06.101 "dhchap_dhgroups": [ 00:15:06.101 "null", 00:15:06.101 "ffdhe2048", 00:15:06.101 "ffdhe3072", 00:15:06.101 "ffdhe4096", 00:15:06.101 "ffdhe6144", 00:15:06.101 "ffdhe8192" 00:15:06.101 ] 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": "bdev_nvme_attach_controller", 00:15:06.101 "params": { 00:15:06.101 "name": "nvme0", 00:15:06.101 "trtype": "TCP", 00:15:06.101 "adrfam": "IPv4", 00:15:06.101 "traddr": "10.0.0.2", 00:15:06.101 "trsvcid": "4420", 00:15:06.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:15:06.101 "prchk_reftag": false, 00:15:06.101 "prchk_guard": false, 00:15:06.101 "ctrlr_loss_timeout_sec": 0, 00:15:06.101 "reconnect_delay_sec": 0, 00:15:06.101 "fast_io_fail_timeout_sec": 0, 00:15:06.101 "psk": "key0", 00:15:06.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.101 "hdgst": false, 00:15:06.101 "ddgst": false 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": "bdev_nvme_set_hotplug", 00:15:06.101 "params": { 00:15:06.101 "period_us": 100000, 00:15:06.101 "enable": false 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": "bdev_enable_histogram", 00:15:06.101 "params": { 00:15:06.101 "name": "nvme0n1", 00:15:06.101 "enable": true 00:15:06.101 } 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "method": "bdev_wait_for_examine" 00:15:06.101 } 00:15:06.101 ] 00:15:06.101 }, 00:15:06.101 { 00:15:06.101 "subsystem": "nbd", 00:15:06.101 "config": [] 00:15:06.101 } 00:15:06.101 ] 00:15:06.101 }' 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74136 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74136 ']' 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74136 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74136 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.101 killing process with pid 74136 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74136' 00:15:06.101 Received shutdown signal, test time was about 1.000000 seconds 00:15:06.101 00:15:06.101 Latency(us) 00:15:06.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.101 =================================================================================================================== 00:15:06.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74136 00:15:06.101 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74136 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74104 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74104 ']' 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74104 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74104 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.361 killing process with pid 74104 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74104' 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74104 00:15:06.361 12:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74104 
00:15:06.620 12:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:06.620 12:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.620 12:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:15:06.620 "subsystems": [ 00:15:06.620 { 00:15:06.620 "subsystem": "keyring", 00:15:06.620 "config": [ 00:15:06.620 { 00:15:06.620 "method": "keyring_file_add_key", 00:15:06.620 "params": { 00:15:06.620 "name": "key0", 00:15:06.620 "path": "/tmp/tmp.G8MA7X6qfn" 00:15:06.620 } 00:15:06.620 } 00:15:06.620 ] 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "subsystem": "iobuf", 00:15:06.620 "config": [ 00:15:06.620 { 00:15:06.620 "method": "iobuf_set_options", 00:15:06.620 "params": { 00:15:06.620 "small_pool_count": 8192, 00:15:06.620 "large_pool_count": 1024, 00:15:06.620 "small_bufsize": 8192, 00:15:06.620 "large_bufsize": 135168 00:15:06.620 } 00:15:06.620 } 00:15:06.620 ] 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "subsystem": "sock", 00:15:06.620 "config": [ 00:15:06.620 { 00:15:06.620 "method": "sock_set_default_impl", 00:15:06.620 "params": { 00:15:06.620 "impl_name": "uring" 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "sock_impl_set_options", 00:15:06.620 "params": { 00:15:06.620 "impl_name": "ssl", 00:15:06.620 "recv_buf_size": 4096, 00:15:06.620 "send_buf_size": 4096, 00:15:06.620 "enable_recv_pipe": true, 00:15:06.620 "enable_quickack": false, 00:15:06.620 "enable_placement_id": 0, 00:15:06.620 "enable_zerocopy_send_server": true, 00:15:06.620 "enable_zerocopy_send_client": false, 00:15:06.620 "zerocopy_threshold": 0, 00:15:06.620 "tls_version": 0, 00:15:06.620 "enable_ktls": false 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "sock_impl_set_options", 00:15:06.620 "params": { 00:15:06.620 "impl_name": "posix", 00:15:06.620 "recv_buf_size": 2097152, 00:15:06.620 "send_buf_size": 2097152, 00:15:06.620 "enable_recv_pipe": true, 00:15:06.620 "enable_quickack": false, 00:15:06.620 "enable_placement_id": 0, 00:15:06.620 "enable_zerocopy_send_server": true, 00:15:06.620 "enable_zerocopy_send_client": false, 00:15:06.620 "zerocopy_threshold": 0, 00:15:06.620 "tls_version": 0, 00:15:06.620 "enable_ktls": false 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "sock_impl_set_options", 00:15:06.620 "params": { 00:15:06.620 "impl_name": "uring", 00:15:06.620 "recv_buf_size": 2097152, 00:15:06.620 "send_buf_size": 2097152, 00:15:06.620 "enable_recv_pipe": true, 00:15:06.620 "enable_quickack": false, 00:15:06.620 "enable_placement_id": 0, 00:15:06.620 "enable_zerocopy_send_server": false, 00:15:06.620 "enable_zerocopy_send_client": false, 00:15:06.620 "zerocopy_threshold": 0, 00:15:06.620 "tls_version": 0, 00:15:06.620 "enable_ktls": false 00:15:06.620 } 00:15:06.620 } 00:15:06.620 ] 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "subsystem": "vmd", 00:15:06.620 "config": [] 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "subsystem": "accel", 00:15:06.620 "config": [ 00:15:06.620 { 00:15:06.620 "method": "accel_set_options", 00:15:06.620 "params": { 00:15:06.620 "small_cache_size": 128, 00:15:06.620 "large_cache_size": 16, 00:15:06.620 "task_count": 2048, 00:15:06.620 "sequence_count": 2048, 00:15:06.620 "buf_count": 2048 00:15:06.620 } 00:15:06.620 } 00:15:06.620 ] 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "subsystem": "bdev", 00:15:06.620 "config": [ 00:15:06.620 { 00:15:06.620 "method": "bdev_set_options", 00:15:06.620 "params": { 00:15:06.620 "bdev_io_pool_size": 65535, 
00:15:06.620 "bdev_io_cache_size": 256, 00:15:06.620 "bdev_auto_examine": true, 00:15:06.620 "iobuf_small_cache_size": 128, 00:15:06.620 "iobuf_large_cache_size": 16 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "bdev_raid_set_options", 00:15:06.620 "params": { 00:15:06.620 "process_window_size_kb": 1024 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "bdev_iscsi_set_options", 00:15:06.620 "params": { 00:15:06.620 "timeout_sec": 30 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "bdev_nvme_set_options", 00:15:06.620 "params": { 00:15:06.620 "action_on_timeout": "none", 00:15:06.620 "timeout_us": 0, 00:15:06.620 "timeout_admin_us": 0, 00:15:06.620 "keep_alive_timeout_ms": 10000, 00:15:06.620 "arbitration_burst": 0, 00:15:06.620 "low_priority_weight": 0, 00:15:06.620 "medium_priority_weight": 0, 00:15:06.620 "high_priority_weight": 0, 00:15:06.620 "nvme_adminq_poll_period_us": 10000, 00:15:06.620 "nvme_ioq_poll_period_us": 0, 00:15:06.620 "io_queue_requests": 0, 00:15:06.620 "delay_cmd_submit": true, 00:15:06.620 "transport_retry_count": 4, 00:15:06.620 "bdev_retry_count": 3, 00:15:06.620 "transport_ack_timeout": 0, 00:15:06.620 "ctrlr_loss_timeout_sec": 0, 00:15:06.620 "reconnect_delay_sec": 0, 00:15:06.620 "fast_io_fail_timeout_sec": 0, 00:15:06.620 "disable_auto_failback": false, 00:15:06.620 "generate_uuids": false, 00:15:06.620 "transport_tos": 0, 00:15:06.620 "nvme_error_stat": false, 00:15:06.620 "rdma_srq_size": 0, 00:15:06.620 "io_path_stat": false, 00:15:06.620 "allow_accel_sequence": false, 00:15:06.620 "rdma_max_cq_size": 0, 00:15:06.620 "rdma_cm_event_timeout_ms": 0, 00:15:06.620 "dhchap_digests": [ 00:15:06.620 "sha256", 00:15:06.620 "sha384", 00:15:06.620 "sha512" 00:15:06.620 ], 00:15:06.620 "dhchap_dhgroups": [ 00:15:06.620 "null", 00:15:06.620 "ffdhe2048", 00:15:06.620 "ffdhe3072", 00:15:06.620 "ffdhe4096", 00:15:06.620 "ffdhe6144", 00:15:06.620 "ffdhe8192" 00:15:06.620 ] 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "bdev_nvme_set_hotplug", 00:15:06.620 "params": { 00:15:06.620 "period_us": 100000, 00:15:06.620 "enable": false 00:15:06.620 } 00:15:06.620 }, 00:15:06.620 { 00:15:06.620 "method": "bdev_malloc_create", 00:15:06.620 "params": { 00:15:06.620 "name": "malloc0", 00:15:06.621 "num_blocks": 8192, 00:15:06.621 "block_size": 4096, 00:15:06.621 "physical_block_size": 4096, 00:15:06.621 "uuid": "90aff3e2-a4ae-4eb0-96fc-682da5092059", 00:15:06.621 "optimal_io_boundary": 0 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "bdev_wait_for_examine" 00:15:06.621 } 00:15:06.621 ] 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "subsystem": "nbd", 00:15:06.621 "config": [] 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "subsystem": "scheduler", 00:15:06.621 "config": [ 00:15:06.621 { 00:15:06.621 "method": "framework_set_scheduler", 00:15:06.621 "params": { 00:15:06.621 "name": "static" 00:15:06.621 } 00:15:06.621 } 00:15:06.621 ] 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "subsystem": "nvmf", 00:15:06.621 "config": [ 00:15:06.621 { 00:15:06.621 "method": "nvmf_set_config", 00:15:06.621 "params": { 00:15:06.621 "discovery_filter": "match_any", 00:15:06.621 "admin_cmd_passthru": { 00:15:06.621 "identify_ctrlr": false 00:15:06.621 } 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "nvmf_set_max_subsystems", 00:15:06.621 "params": { 00:15:06.621 "max_subsystems": 1024 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "nvmf_set_crdt", 
00:15:06.621 "params": { 00:15:06.621 "crdt1": 0, 00:15:06.621 "crdt2": 0, 00:15:06.621 "crdt3": 0 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "nvmf_create_transport", 00:15:06.621 "params": { 00:15:06.621 "trtype": "TCP", 00:15:06.621 "max_queue_depth": 128, 00:15:06.621 "max_io_qpairs_per_ctrlr": 127, 00:15:06.621 "in_capsule_data_size": 4096, 00:15:06.621 "max_io_size": 131072, 00:15:06.621 "io_unit_size": 131072, 00:15:06.621 "max_aq_depth": 128, 00:15:06.621 "num_shared_buffers": 511, 00:15:06.621 "buf_cache_size": 4294967295, 00:15:06.621 "dif_insert_or_strip": false, 00:15:06.621 "zcopy": false, 00:15:06.621 "c2h_success": false, 00:15:06.621 "sock_priority": 0, 00:15:06.621 "abort_timeout_sec": 1, 00:15:06.621 "ack_timeout": 0, 00:15:06.621 "data_wr_pool_size": 0 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "nvmf_create_subsystem", 00:15:06.621 "params": { 00:15:06.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.621 "allow_any_host": false, 00:15:06.621 "serial_number": "00000000000000000000", 00:15:06.621 "model_number": "SPDK bdev Controller", 00:15:06.621 "max_namespaces": 32, 00:15:06.621 "min_cntlid": 1, 00:15:06.621 "max_cntlid": 65519, 00:15:06.621 "ana_reporting": false 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "nvmf_subsystem_add_host", 00:15:06.621 "params": { 00:15:06.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.621 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.621 "psk": "key0" 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "nvmf_subsystem_add_ns", 00:15:06.621 "params": { 00:15:06.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.621 "namespace": { 00:15:06.621 "nsid": 1, 00:15:06.621 "bdev_name": "malloc0", 00:15:06.621 "nguid": "90AFF3E2A4AE4EB096FC682DA5092059", 00:15:06.621 "uuid": "90aff3e2-a4ae-4eb0-96fc-682da5092059", 00:15:06.621 "no_auto_visible": false 00:15:06.621 } 00:15:06.621 } 00:15:06.621 }, 00:15:06.621 { 00:15:06.621 "method": "nvmf_subsystem_add_listener", 00:15:06.621 "params": { 00:15:06.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.621 "listen_address": { 00:15:06.621 "trtype": "TCP", 00:15:06.621 "adrfam": "IPv4", 00:15:06.621 "traddr": "10.0.0.2", 00:15:06.621 "trsvcid": "4420" 00:15:06.621 }, 00:15:06.621 "secure_channel": true 00:15:06.621 } 00:15:06.621 } 00:15:06.621 ] 00:15:06.621 } 00:15:06.621 ] 00:15:06.621 }' 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74202 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74202 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74202 ']' 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.621 12:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.621 [2024-07-15 12:39:39.190207] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:06.621 [2024-07-15 12:39:39.190328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.880 [2024-07-15 12:39:39.331065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.880 [2024-07-15 12:39:39.441783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.880 [2024-07-15 12:39:39.441852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.880 [2024-07-15 12:39:39.441865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.880 [2024-07-15 12:39:39.441873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.880 [2024-07-15 12:39:39.441881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.880 [2024-07-15 12:39:39.441968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.139 [2024-07-15 12:39:39.614813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:07.139 [2024-07-15 12:39:39.691797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.139 [2024-07-15 12:39:39.723702] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.139 [2024-07-15 12:39:39.723938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74234 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74234 /var/tmp/bdevperf.sock 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74234 ']' 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
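On the initiator side, bdevperf (pid 74234) is started in -z mode with its own JSON configuration piped over /dev/fd/63, as the xtrace recorded next shows. A minimal sketch of that pattern, assuming the same key, NQNs and address as the captured config: only the keyring entry and the TLS-protected NVMe/TCP attach are kept, the remaining parameters are left at their defaults, and the config file name is illustrative.

# Sketch of the initiator side: minimal bdevperf JSON (keyring entry plus TLS attach)
# and the wait-for-RPC (-z) launch used by the test. Parameters are trimmed to the
# TLS-relevant ones; file name is illustrative, queue depth and I/O size mirror the log.
cat > bperf_min.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.G8MA7X6qfn" } }
    ] },
    { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                    "traddr": "10.0.0.2", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "bdev_wait_for_examine" }
    ] }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c bperf_min.json &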
00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:07.708 12:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:15:07.708 "subsystems": [ 00:15:07.708 { 00:15:07.708 "subsystem": "keyring", 00:15:07.708 "config": [ 00:15:07.708 { 00:15:07.708 "method": "keyring_file_add_key", 00:15:07.708 "params": { 00:15:07.708 "name": "key0", 00:15:07.708 "path": "/tmp/tmp.G8MA7X6qfn" 00:15:07.708 } 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "subsystem": "iobuf", 00:15:07.708 "config": [ 00:15:07.708 { 00:15:07.708 "method": "iobuf_set_options", 00:15:07.708 "params": { 00:15:07.708 "small_pool_count": 8192, 00:15:07.708 "large_pool_count": 1024, 00:15:07.708 "small_bufsize": 8192, 00:15:07.708 "large_bufsize": 135168 00:15:07.708 } 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "subsystem": "sock", 00:15:07.708 "config": [ 00:15:07.708 { 00:15:07.708 "method": "sock_set_default_impl", 00:15:07.708 "params": { 00:15:07.708 "impl_name": "uring" 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "sock_impl_set_options", 00:15:07.708 "params": { 00:15:07.708 "impl_name": "ssl", 00:15:07.708 "recv_buf_size": 4096, 00:15:07.708 "send_buf_size": 4096, 00:15:07.708 "enable_recv_pipe": true, 00:15:07.708 "enable_quickack": false, 00:15:07.708 "enable_placement_id": 0, 00:15:07.708 "enable_zerocopy_send_server": true, 00:15:07.708 "enable_zerocopy_send_client": false, 00:15:07.708 "zerocopy_threshold": 0, 00:15:07.708 "tls_version": 0, 00:15:07.708 "enable_ktls": false 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "sock_impl_set_options", 00:15:07.708 "params": { 00:15:07.708 "impl_name": "posix", 00:15:07.708 "recv_buf_size": 2097152, 00:15:07.708 "send_buf_size": 2097152, 00:15:07.708 "enable_recv_pipe": true, 00:15:07.708 "enable_quickack": false, 00:15:07.708 "enable_placement_id": 0, 00:15:07.708 "enable_zerocopy_send_server": true, 00:15:07.708 "enable_zerocopy_send_client": false, 00:15:07.708 "zerocopy_threshold": 0, 00:15:07.708 "tls_version": 0, 00:15:07.708 "enable_ktls": false 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "sock_impl_set_options", 00:15:07.708 "params": { 00:15:07.708 "impl_name": "uring", 00:15:07.708 "recv_buf_size": 2097152, 00:15:07.708 "send_buf_size": 2097152, 00:15:07.708 "enable_recv_pipe": true, 00:15:07.708 "enable_quickack": false, 00:15:07.708 "enable_placement_id": 0, 00:15:07.708 "enable_zerocopy_send_server": false, 00:15:07.708 "enable_zerocopy_send_client": false, 00:15:07.708 "zerocopy_threshold": 0, 00:15:07.708 "tls_version": 0, 00:15:07.708 "enable_ktls": false 00:15:07.708 } 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "subsystem": "vmd", 00:15:07.708 "config": [] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "subsystem": "accel", 00:15:07.708 "config": [ 00:15:07.708 { 00:15:07.708 "method": "accel_set_options", 00:15:07.708 "params": { 00:15:07.708 "small_cache_size": 128, 00:15:07.708 "large_cache_size": 16, 00:15:07.708 "task_count": 2048, 00:15:07.708 "sequence_count": 2048, 00:15:07.708 "buf_count": 2048 00:15:07.708 } 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 
00:15:07.708 "subsystem": "bdev", 00:15:07.708 "config": [ 00:15:07.708 { 00:15:07.708 "method": "bdev_set_options", 00:15:07.708 "params": { 00:15:07.708 "bdev_io_pool_size": 65535, 00:15:07.708 "bdev_io_cache_size": 256, 00:15:07.708 "bdev_auto_examine": true, 00:15:07.708 "iobuf_small_cache_size": 128, 00:15:07.708 "iobuf_large_cache_size": 16 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_raid_set_options", 00:15:07.708 "params": { 00:15:07.708 "process_window_size_kb": 1024 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_iscsi_set_options", 00:15:07.708 "params": { 00:15:07.708 "timeout_sec": 30 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_nvme_set_options", 00:15:07.708 "params": { 00:15:07.708 "action_on_timeout": "none", 00:15:07.708 "timeout_us": 0, 00:15:07.708 "timeout_admin_us": 0, 00:15:07.708 "keep_alive_timeout_ms": 10000, 00:15:07.708 "arbitration_burst": 0, 00:15:07.708 "low_priority_weight": 0, 00:15:07.708 "medium_priority_weight": 0, 00:15:07.708 "high_priority_weight": 0, 00:15:07.708 "nvme_adminq_poll_period_us": 10000, 00:15:07.708 "nvme_ioq_poll_period_us": 0, 00:15:07.708 "io_queue_requests": 512, 00:15:07.708 "delay_cmd_submit": true, 00:15:07.708 "transport_retry_count": 4, 00:15:07.708 "bdev_retry_count": 3, 00:15:07.708 "transport_ack_timeout": 0, 00:15:07.708 "ctrlr_loss_timeout_sec": 0, 00:15:07.708 "reconnect_delay_sec": 0, 00:15:07.709 "fast_io_fail_timeout_sec": 0, 00:15:07.709 "disable_auto_failback": false, 00:15:07.709 "generate_uuids": false, 00:15:07.709 "transport_tos": 0, 00:15:07.709 "nvme_error_stat": false, 00:15:07.709 "rdma_srq_size": 0, 00:15:07.709 "io_path_stat": false, 00:15:07.709 "allow_accel_sequence": false, 00:15:07.709 "rdma_max_cq_size": 0, 00:15:07.709 "rdma_cm_event_timeout_ms": 0, 00:15:07.709 "dhchap_digests": [ 00:15:07.709 "sha256", 00:15:07.709 "sha384", 00:15:07.709 "sha512" 00:15:07.709 ], 00:15:07.709 "dhchap_dhgroups": [ 00:15:07.709 "null", 00:15:07.709 "ffdhe2048", 00:15:07.709 "ffdhe3072", 00:15:07.709 "ffdhe4096", 00:15:07.709 "ffdhe6144", 00:15:07.709 "ffdhe8192" 00:15:07.709 ] 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_nvme_attach_controller", 00:15:07.709 "params": { 00:15:07.709 "name": "nvme0", 00:15:07.709 "trtype": "TCP", 00:15:07.709 "adrfam": "IPv4", 00:15:07.709 "traddr": "10.0.0.2", 00:15:07.709 "trsvcid": "4420", 00:15:07.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.709 "prchk_reftag": false, 00:15:07.709 "prchk_guard": false, 00:15:07.709 "ctrlr_loss_timeout_sec": 0, 00:15:07.709 "reconnect_delay_sec": 0, 00:15:07.709 "fast_io_fail_timeout_sec": 0, 00:15:07.709 "psk": "key0", 00:15:07.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.709 "hdgst": false, 00:15:07.709 "ddgst": false 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_nvme_set_hotplug", 00:15:07.709 "params": { 00:15:07.709 "period_us": 100000, 00:15:07.709 "enable": false 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_enable_histogram", 00:15:07.709 "params": { 00:15:07.709 "name": "nvme0n1", 00:15:07.709 "enable": true 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_wait_for_examine" 00:15:07.709 } 00:15:07.709 ] 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "subsystem": "nbd", 00:15:07.709 "config": [] 00:15:07.709 } 00:15:07.709 ] 00:15:07.709 }' 00:15:07.709 [2024-07-15 12:39:40.306083] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 
24.03.0 initialization... 00:15:07.709 [2024-07-15 12:39:40.306209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74234 ] 00:15:07.968 [2024-07-15 12:39:40.448335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.968 [2024-07-15 12:39:40.571802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.226 [2024-07-15 12:39:40.708601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:08.226 [2024-07-15 12:39:40.758094] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.793 12:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.793 12:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:08.793 12:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:08.793 12:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:09.051 12:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.051 12:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.051 Running I/O for 1 seconds... 00:15:10.429 00:15:10.429 Latency(us) 00:15:10.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.429 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:10.429 Verification LBA range: start 0x0 length 0x2000 00:15:10.429 nvme0n1 : 1.02 3977.79 15.54 0.00 0.00 31780.47 7000.44 30146.56 00:15:10.429 =================================================================================================================== 00:15:10.429 Total : 3977.79 15.54 0.00 0.00 31780.47 7000.44 30146.56 00:15:10.429 0 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:10.429 nvmf_trace.0 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74234 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74234 ']' 00:15:10.429 12:39:42 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74234 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74234 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:10.429 killing process with pid 74234 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74234' 00:15:10.429 Received shutdown signal, test time was about 1.000000 seconds 00:15:10.429 00:15:10.429 Latency(us) 00:15:10.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.429 =================================================================================================================== 00:15:10.429 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74234 00:15:10.429 12:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74234 00:15:10.429 12:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:10.429 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.429 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:10.429 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.429 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:10.429 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.429 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.690 rmmod nvme_tcp 00:15:10.690 rmmod nvme_fabrics 00:15:10.690 rmmod nvme_keyring 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74202 ']' 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74202 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74202 ']' 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74202 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74202 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:10.690 killing process with pid 74202 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74202' 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74202 00:15:10.690 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74202 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
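Before the teardown above, the run itself is driven entirely over the bdevperf RPC socket: the test confirms that the TLS-attached controller came up under the expected name and then triggers the timed workload through bdevperf's helper script. A short sketch of that sequence, using the socket path and script locations shown in the log.

# Sketch: verify the attached controller, then kick off the timed run. Both commands
# mirror the invocations recorded above (target/tls.sh@275 and @276).
name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
       bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || exit 1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The histogram enabled in the bdevperf config and the one-second verify run are what produce the IOPS/latency table printed above before cleanup starts.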
00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ajMHV1bz32 /tmp/tmp.9IkyMq36t5 /tmp/tmp.G8MA7X6qfn 00:15:10.948 00:15:10.948 real 1m28.338s 00:15:10.948 user 2m15.978s 00:15:10.948 sys 0m31.156s 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.948 12:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.948 ************************************ 00:15:10.948 END TEST nvmf_tls 00:15:10.948 ************************************ 00:15:10.948 12:39:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:10.948 12:39:43 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:10.948 12:39:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:10.948 12:39:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.948 12:39:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.948 ************************************ 00:15:10.948 START TEST nvmf_fips 00:15:10.948 ************************************ 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:10.948 * Looking for test storage... 
00:15:10.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.948 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:11.207 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:11.208 Error setting digest 00:15:11.208 004231574B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:11.208 004231574B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.208 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:11.209 Cannot find device "nvmf_tgt_br" 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.209 Cannot find device "nvmf_tgt_br2" 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:11.209 Cannot find device "nvmf_tgt_br" 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:11.209 Cannot find device "nvmf_tgt_br2" 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:11.209 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.468 12:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:11.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:11.468 00:15:11.468 --- 10.0.0.2 ping statistics --- 00:15:11.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.468 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:11.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:11.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:11.468 00:15:11.468 --- 10.0.0.3 ping statistics --- 00:15:11.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.468 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:11.468 00:15:11.468 --- 10.0.0.1 ping statistics --- 00:15:11.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.468 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:11.468 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74508 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74508 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74508 ']' 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.727 12:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:11.727 [2024-07-15 12:39:44.263923] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
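For reference, the veth/namespace topology that the nvmf_veth_init trace above builds can be reproduced standalone with roughly the commands below. Interface names and addresses are copied from the log; this is a condensed sketch of the helper in test/nvmf/common.sh, not its verbatim code, and it must run as root.

# Build the three-veth, one-bridge test network used by the nvmf tests.
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# Three veth pairs: one initiator-side, two target-side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target-facing ends into the namespace the nvmf_tgt app runs in.
ip link set nvmf_tgt_if netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
# Addressing: initiator 10.0.0.1, first target IP 10.0.0.2, second target IP 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
# Bridge the host-side peer ends together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Let NVMe/TCP traffic (port 4420) and bridged frames through the host firewall.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Basic reachability checks, matching the pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1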
00:15:11.727 [2024-07-15 12:39:44.264028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.727 [2024-07-15 12:39:44.402969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.986 [2024-07-15 12:39:44.529551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.986 [2024-07-15 12:39:44.529613] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.986 [2024-07-15 12:39:44.529624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.986 [2024-07-15 12:39:44.529632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.987 [2024-07-15 12:39:44.529639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.987 [2024-07-15 12:39:44.529662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.987 [2024-07-15 12:39:44.585166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:12.958 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.958 [2024-07-15 12:39:45.567389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.958 [2024-07-15 12:39:45.583301] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.958 [2024-07-15 12:39:45.583524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.958 [2024-07-15 12:39:45.615990] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:12.958 malloc0 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
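The key handling in fips.sh above reduces to writing the TLS PSK interchange key into a file that only the owner can read and then handing that path to the target. A minimal sketch, with the key string and path taken from the trace:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"   # PSK material must not be group- or world-readable
# setup_nvmf_tgt_conf then registers this path with the running target via scripts/rpc.py;
# the "PSK path ... deprecated" notice above is the target accepting that path-based PSK.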
00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74548 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74548 /var/tmp/bdevperf.sock 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74548 ']' 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.216 12:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 [2024-07-15 12:39:45.725537] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:13.216 [2024-07-15 12:39:45.725639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74548 ] 00:15:13.216 [2024-07-15 12:39:45.868215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.474 [2024-07-15 12:39:46.031003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.474 [2024-07-15 12:39:46.107455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.041 12:39:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.041 12:39:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:14.041 12:39:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:14.300 [2024-07-15 12:39:46.939679] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.300 [2024-07-15 12:39:46.939867] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:14.559 TLSTESTn1 00:15:14.559 12:39:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.559 Running I/O for 10 seconds... 
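On the initiator side, the commands interleaved through the trace above form a three-step flow (binaries, sockets, and arguments copied from the log): start bdevperf idle, attach a TLS-protected NVMe/TCP controller through its RPC socket, then run the verify workload. In the test itself a waitforlisten step sits between steps 1 and 2 so the RPC socket exists before it is used.

# 1. Start bdevperf on core 2 (-m 0x4), idle (-z), with its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# 2. Attach the target subsystem over TCP using the PSK written earlier.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

# 3. Kick off the configured workload (the "Running I/O for 10 seconds..." phase above).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests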
00:15:24.532 00:15:24.532 Latency(us) 00:15:24.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.532 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:24.532 Verification LBA range: start 0x0 length 0x2000 00:15:24.532 TLSTESTn1 : 10.03 3556.11 13.89 0.00 0.00 35907.37 8340.95 33125.47 00:15:24.533 =================================================================================================================== 00:15:24.533 Total : 3556.11 13.89 0.00 0.00 35907.37 8340.95 33125.47 00:15:24.533 0 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:24.533 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:24.790 nvmf_trace.0 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74548 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74548 ']' 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74548 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74548 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74548' 00:15:24.790 killing process with pid 74548 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74548 00:15:24.790 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74548 00:15:24.790 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.790 00:15:24.790 Latency(us) 00:15:24.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.790 =================================================================================================================== 00:15:24.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.790 [2024-07-15 12:39:57.330812] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:25.048 12:39:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:25.048 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
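The cleanup that follows (the fips.sh cleanup trap plus nvmftestfini) archives the tracepoint buffer, stops both applications, unloads the kernel NVMe/TCP modules, and tears the test network down. Condensed from the trace, with one caveat: the body of _remove_spdk_ns is not shown in this part of the log, so the ip netns delete line below is an assumption about what it does.

# Archive the shared-memory trace file into the build output directory.
tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
# Stop bdevperf and the nvmf target (pids 74548 and 74508 in this run), as killprocess does.
kill "$bdevperf_pid" && wait "$bdevperf_pid"
kill "$nvmfpid"
# Unload the kernel initiator modules loaded by "modprobe nvme-tcp" earlier.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Tear down the target namespace (assumed content of _remove_spdk_ns) and flush the initiator address.
ip netns delete nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt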
00:15:25.048 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.307 rmmod nvme_tcp 00:15:25.307 rmmod nvme_fabrics 00:15:25.307 rmmod nvme_keyring 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74508 ']' 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74508 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74508 ']' 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74508 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74508 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:25.307 killing process with pid 74508 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74508' 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74508 00:15:25.307 [2024-07-15 12:39:57.831029] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:25.307 12:39:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74508 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:25.565 00:15:25.565 real 0m14.587s 00:15:25.565 user 0m18.789s 00:15:25.565 sys 0m6.692s 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.565 ************************************ 00:15:25.565 END TEST nvmf_fips 00:15:25.565 ************************************ 00:15:25.565 12:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:25.565 12:39:58 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:25.565 12:39:58 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:25.565 12:39:58 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:25.565 12:39:58 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:25.565 12:39:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.565 12:39:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.565 12:39:58 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:25.565 12:39:58 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.565 12:39:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.565 12:39:58 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:25.565 12:39:58 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:25.565 12:39:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:25.565 12:39:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.565 12:39:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.565 ************************************ 00:15:25.565 START TEST nvmf_identify 00:15:25.565 ************************************ 00:15:25.565 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:25.824 * Looking for test storage... 00:15:25.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.824 12:39:58 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.824 12:39:58 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.824 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:25.825 Cannot find device "nvmf_tgt_br" 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.825 Cannot find device "nvmf_tgt_br2" 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:25.825 12:39:58 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:25.825 Cannot find device "nvmf_tgt_br" 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:25.825 Cannot find device "nvmf_tgt_br2" 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.825 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:26.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:15:26.084 00:15:26.084 --- 10.0.0.2 ping statistics --- 00:15:26.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.084 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:26.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:26.084 00:15:26.084 --- 10.0.0.3 ping statistics --- 00:15:26.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.084 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:26.084 00:15:26.084 --- 10.0.0.1 ping statistics --- 00:15:26.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.084 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74892 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74892 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74892 ']' 00:15:26.084 12:39:58 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.084 12:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.084 [2024-07-15 12:39:58.716581] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:26.084 [2024-07-15 12:39:58.716656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.343 [2024-07-15 12:39:58.853323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.343 [2024-07-15 12:39:58.989171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.343 [2024-07-15 12:39:58.989235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.343 [2024-07-15 12:39:58.989249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.343 [2024-07-15 12:39:58.989260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.343 [2024-07-15 12:39:58.989269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
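The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper; conceptually it just polls the freshly started target's RPC socket until it answers. A simplified stand-in, not the helper's actual code:

# Poll the RPC socket until nvmf_tgt responds; rpc_get_methods is a cheap query.
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done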
00:15:26.343 [2024-07-15 12:39:58.989409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.343 [2024-07-15 12:39:58.990394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.343 [2024-07-15 12:39:58.990523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.343 [2024-07-15 12:39:58.990532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.600 [2024-07-15 12:39:59.049996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.167 [2024-07-15 12:39:59.778951] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.167 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 Malloc0 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 [2024-07-15 12:39:59.879302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.427 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.428 [ 00:15:27.428 { 00:15:27.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.428 "subtype": "Discovery", 00:15:27.428 "listen_addresses": [ 00:15:27.428 { 00:15:27.428 "trtype": "TCP", 00:15:27.428 "adrfam": "IPv4", 00:15:27.428 "traddr": "10.0.0.2", 00:15:27.428 "trsvcid": "4420" 00:15:27.428 } 00:15:27.428 ], 00:15:27.428 "allow_any_host": true, 00:15:27.428 "hosts": [] 00:15:27.428 }, 00:15:27.428 { 00:15:27.428 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.428 "subtype": "NVMe", 00:15:27.428 "listen_addresses": [ 00:15:27.428 { 00:15:27.428 "trtype": "TCP", 00:15:27.428 "adrfam": "IPv4", 00:15:27.428 "traddr": "10.0.0.2", 00:15:27.428 "trsvcid": "4420" 00:15:27.428 } 00:15:27.428 ], 00:15:27.428 "allow_any_host": true, 00:15:27.428 "hosts": [], 00:15:27.428 "serial_number": "SPDK00000000000001", 00:15:27.428 "model_number": "SPDK bdev Controller", 00:15:27.428 "max_namespaces": 32, 00:15:27.428 "min_cntlid": 1, 00:15:27.428 "max_cntlid": 65519, 00:15:27.428 "namespaces": [ 00:15:27.428 { 00:15:27.428 "nsid": 1, 00:15:27.428 "bdev_name": "Malloc0", 00:15:27.428 "name": "Malloc0", 00:15:27.428 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:27.428 "eui64": "ABCDEF0123456789", 00:15:27.428 "uuid": "aa8d378a-7edb-40af-b178-dd47e1f31b92" 00:15:27.428 } 00:15:27.428 ] 00:15:27.428 } 00:15:27.428 ] 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.428 12:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:27.428 [2024-07-15 12:39:59.935702] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
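Stripped of the rpc_cmd/xtrace plumbing (rpc_cmd in this harness forwards to scripts/rpc.py), the target configuration the identify test performs above is the following sequence of calls, with arguments copied from the trace; the last command is the host-side identify pass against the discovery subsystem that produces the debug output which follows.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems      # returns the JSON dump shown above

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all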
00:15:27.428 [2024-07-15 12:39:59.935973] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74927 ] 00:15:27.428 [2024-07-15 12:40:00.084309] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:27.428 [2024-07-15 12:40:00.084386] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:27.428 [2024-07-15 12:40:00.084394] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:27.428 [2024-07-15 12:40:00.084409] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:27.428 [2024-07-15 12:40:00.084418] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:27.428 [2024-07-15 12:40:00.084586] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:27.428 [2024-07-15 12:40:00.084692] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11352c0 0 00:15:27.428 [2024-07-15 12:40:00.096759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:27.428 [2024-07-15 12:40:00.096788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:27.428 [2024-07-15 12:40:00.096795] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:27.428 [2024-07-15 12:40:00.096799] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:27.428 [2024-07-15 12:40:00.096859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.096868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.096873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.428 [2024-07-15 12:40:00.096890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:27.428 [2024-07-15 12:40:00.096923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.428 [2024-07-15 12:40:00.104769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.428 [2024-07-15 12:40:00.104801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.428 [2024-07-15 12:40:00.104807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.104812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.428 [2024-07-15 12:40:00.104825] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:27.428 [2024-07-15 12:40:00.104836] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:27.428 [2024-07-15 12:40:00.104844] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:27.428 [2024-07-15 12:40:00.104864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.104870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.428 
[2024-07-15 12:40:00.104875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.428 [2024-07-15 12:40:00.104885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.428 [2024-07-15 12:40:00.104914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.428 [2024-07-15 12:40:00.105021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.428 [2024-07-15 12:40:00.105029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.428 [2024-07-15 12:40:00.105033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.428 [2024-07-15 12:40:00.105044] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:27.428 [2024-07-15 12:40:00.105053] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:27.428 [2024-07-15 12:40:00.105062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.428 [2024-07-15 12:40:00.105079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.428 [2024-07-15 12:40:00.105100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.428 [2024-07-15 12:40:00.105178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.428 [2024-07-15 12:40:00.105185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.428 [2024-07-15 12:40:00.105189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.428 [2024-07-15 12:40:00.105201] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:27.428 [2024-07-15 12:40:00.105211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:27.428 [2024-07-15 12:40:00.105219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.428 [2024-07-15 12:40:00.105235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.428 [2024-07-15 12:40:00.105253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.428 [2024-07-15 12:40:00.105334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.428 [2024-07-15 12:40:00.105354] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.428 [2024-07-15 12:40:00.105358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.428 [2024-07-15 12:40:00.105369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:27.428 [2024-07-15 12:40:00.105380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.428 [2024-07-15 12:40:00.105397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.428 [2024-07-15 12:40:00.105415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.428 [2024-07-15 12:40:00.105484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.428 [2024-07-15 12:40:00.105492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.428 [2024-07-15 12:40:00.105496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.428 [2024-07-15 12:40:00.105506] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:27.428 [2024-07-15 12:40:00.105512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:27.428 [2024-07-15 12:40:00.105520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:27.428 [2024-07-15 12:40:00.105627] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:27.428 [2024-07-15 12:40:00.105633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:27.428 [2024-07-15 12:40:00.105643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.428 [2024-07-15 12:40:00.105652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.428 [2024-07-15 12:40:00.105660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.428 [2024-07-15 12:40:00.105679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.428 [2024-07-15 12:40:00.105790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.428 [2024-07-15 12:40:00.105800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.428 [2024-07-15 12:40:00.105804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.428 
[2024-07-15 12:40:00.105808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.428 [2024-07-15 12:40:00.105814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:27.429 [2024-07-15 12:40:00.105825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.105831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.105835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.105843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.429 [2024-07-15 12:40:00.105864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.429 [2024-07-15 12:40:00.105937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.429 [2024-07-15 12:40:00.105944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.429 [2024-07-15 12:40:00.105948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.105953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.429 [2024-07-15 12:40:00.105958] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:27.429 [2024-07-15 12:40:00.105964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:27.429 [2024-07-15 12:40:00.105973] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:27.429 [2024-07-15 12:40:00.105985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:27.429 [2024-07-15 12:40:00.105998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.429 [2024-07-15 12:40:00.106037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.429 [2024-07-15 12:40:00.106171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.429 [2024-07-15 12:40:00.106179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.429 [2024-07-15 12:40:00.106183] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106188] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11352c0): datao=0, datal=4096, cccid=0 00:15:27.429 [2024-07-15 12:40:00.106194] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1176940) on tqpair(0x11352c0): expected_datao=0, payload_size=4096 00:15:27.429 [2024-07-15 12:40:00.106199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 
[2024-07-15 12:40:00.106208] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106213] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.429 [2024-07-15 12:40:00.106234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.429 [2024-07-15 12:40:00.106238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.429 [2024-07-15 12:40:00.106253] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:27.429 [2024-07-15 12:40:00.106259] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:27.429 [2024-07-15 12:40:00.106264] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:27.429 [2024-07-15 12:40:00.106270] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:27.429 [2024-07-15 12:40:00.106275] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:27.429 [2024-07-15 12:40:00.106281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:27.429 [2024-07-15 12:40:00.106290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:27.429 [2024-07-15 12:40:00.106299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.429 [2024-07-15 12:40:00.106338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.429 [2024-07-15 12:40:00.106430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.429 [2024-07-15 12:40:00.106437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.429 [2024-07-15 12:40:00.106442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.429 [2024-07-15 12:40:00.106454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.429 [2024-07-15 12:40:00.106477] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.429 [2024-07-15 12:40:00.106499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.429 [2024-07-15 12:40:00.106521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.429 [2024-07-15 12:40:00.106541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:27.429 [2024-07-15 12:40:00.106556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:27.429 [2024-07-15 12:40:00.106564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.429 [2024-07-15 12:40:00.106597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176940, cid 0, qid 0 00:15:27.429 [2024-07-15 12:40:00.106605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176ac0, cid 1, qid 0 00:15:27.429 [2024-07-15 12:40:00.106610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176c40, cid 2, qid 0 00:15:27.429 [2024-07-15 12:40:00.106616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176dc0, cid 3, qid 0 00:15:27.429 [2024-07-15 12:40:00.106621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176f40, cid 4, qid 0 00:15:27.429 [2024-07-15 12:40:00.106772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.429 [2024-07-15 12:40:00.106781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.429 [2024-07-15 12:40:00.106786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176f40) on tqpair=0x11352c0 00:15:27.429 [2024-07-15 12:40:00.106796] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:27.429 [2024-07-15 12:40:00.106807] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:27.429 [2024-07-15 12:40:00.106820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.106833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.429 [2024-07-15 12:40:00.106855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176f40, cid 4, qid 0 00:15:27.429 [2024-07-15 12:40:00.106956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.429 [2024-07-15 12:40:00.106970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.429 [2024-07-15 12:40:00.106975] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106980] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11352c0): datao=0, datal=4096, cccid=4 00:15:27.429 [2024-07-15 12:40:00.106985] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1176f40) on tqpair(0x11352c0): expected_datao=0, payload_size=4096 00:15:27.429 [2024-07-15 12:40:00.106990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.106998] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.107002] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.107011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.429 [2024-07-15 12:40:00.107018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.429 [2024-07-15 12:40:00.107022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.107026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176f40) on tqpair=0x11352c0 00:15:27.429 [2024-07-15 12:40:00.107042] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:27.429 [2024-07-15 12:40:00.107075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.107082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.107089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.429 [2024-07-15 12:40:00.107098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.107103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.429 [2024-07-15 12:40:00.107107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11352c0) 00:15:27.429 [2024-07-15 12:40:00.107114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.429 [2024-07-15 12:40:00.107141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1176f40, cid 4, qid 0 00:15:27.429 [2024-07-15 12:40:00.107149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11770c0, cid 5, qid 0 00:15:27.429 [2024-07-15 12:40:00.107358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.429 [2024-07-15 12:40:00.107369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.430 [2024-07-15 12:40:00.107373] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107377] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11352c0): datao=0, datal=1024, cccid=4 00:15:27.430 [2024-07-15 12:40:00.107383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1176f40) on tqpair(0x11352c0): expected_datao=0, payload_size=1024 00:15:27.430 [2024-07-15 12:40:00.107388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107395] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107399] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.430 [2024-07-15 12:40:00.107412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.430 [2024-07-15 12:40:00.107416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11770c0) on tqpair=0x11352c0 00:15:27.430 [2024-07-15 12:40:00.107443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.430 [2024-07-15 12:40:00.107452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.430 [2024-07-15 12:40:00.107456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176f40) on tqpair=0x11352c0 00:15:27.430 [2024-07-15 12:40:00.107474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11352c0) 00:15:27.430 [2024-07-15 12:40:00.107488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.430 [2024-07-15 12:40:00.107515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176f40, cid 4, qid 0 00:15:27.430 [2024-07-15 12:40:00.107626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.430 [2024-07-15 12:40:00.107634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.430 [2024-07-15 12:40:00.107638] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107642] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11352c0): datao=0, datal=3072, cccid=4 00:15:27.430 [2024-07-15 12:40:00.107647] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1176f40) on tqpair(0x11352c0): expected_datao=0, payload_size=3072 00:15:27.430 [2024-07-15 12:40:00.107652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107659] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107664] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.430 [2024-07-15 12:40:00.107685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.430 [2024-07-15 12:40:00.107689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176f40) on tqpair=0x11352c0 00:15:27.430 [2024-07-15 12:40:00.107703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11352c0) 00:15:27.430 [2024-07-15 12:40:00.107716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.430 [2024-07-15 12:40:00.107755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176f40, cid 4, qid 0 00:15:27.430 [2024-07-15 12:40:00.107864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.430 [2024-07-15 12:40:00.107871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.430 [2024-07-15 12:40:00.107875] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107879] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11352c0): datao=0, datal=8, cccid=4 00:15:27.430 [2024-07-15 12:40:00.107884] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1176f40) on tqpair(0x11352c0): expected_datao=0, payload_size=8 00:15:27.430 [2024-07-15 12:40:00.107889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107897] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107901] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.430 ===================================================== 00:15:27.430 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:27.430 ===================================================== 00:15:27.430 Controller Capabilities/Features 00:15:27.430 ================================ 00:15:27.430 Vendor ID: 0000 00:15:27.430 Subsystem Vendor ID: 0000 00:15:27.430 Serial Number: .................... 00:15:27.430 Model Number: ........................................ 
00:15:27.430 Firmware Version: 24.09 00:15:27.430 Recommended Arb Burst: 0 00:15:27.430 IEEE OUI Identifier: 00 00 00 00:15:27.430 Multi-path I/O 00:15:27.430 May have multiple subsystem ports: No 00:15:27.430 May have multiple controllers: No 00:15:27.430 Associated with SR-IOV VF: No 00:15:27.430 Max Data Transfer Size: 131072 00:15:27.430 Max Number of Namespaces: 0 00:15:27.430 Max Number of I/O Queues: 1024 00:15:27.430 NVMe Specification Version (VS): 1.3 00:15:27.430 NVMe Specification Version (Identify): 1.3 00:15:27.430 Maximum Queue Entries: 128 00:15:27.430 Contiguous Queues Required: Yes 00:15:27.430 Arbitration Mechanisms Supported 00:15:27.430 Weighted Round Robin: Not Supported 00:15:27.430 Vendor Specific: Not Supported 00:15:27.430 Reset Timeout: 15000 ms 00:15:27.430 Doorbell Stride: 4 bytes 00:15:27.430 NVM Subsystem Reset: Not Supported 00:15:27.430 Command Sets Supported 00:15:27.430 NVM Command Set: Supported 00:15:27.430 Boot Partition: Not Supported 00:15:27.430 Memory Page Size Minimum: 4096 bytes 00:15:27.430 Memory Page Size Maximum: 4096 bytes 00:15:27.430 Persistent Memory Region: Not Supported 00:15:27.430 Optional Asynchronous Events Supported 00:15:27.430 Namespace Attribute Notices: Not Supported 00:15:27.430 Firmware Activation Notices: Not Supported 00:15:27.430 ANA Change Notices: Not Supported 00:15:27.430 PLE Aggregate Log Change Notices: Not Supported 00:15:27.430 LBA Status Info Alert Notices: Not Supported 00:15:27.430 EGE Aggregate Log Change Notices: Not Supported 00:15:27.430 Normal NVM Subsystem Shutdown event: Not Supported 00:15:27.430 Zone Descriptor Change Notices: Not Supported 00:15:27.430 Discovery Log Change Notices: Supported 00:15:27.430 Controller Attributes 00:15:27.430 128-bit Host Identifier: Not Supported 00:15:27.430 Non-Operational Permissive Mode: Not Supported 00:15:27.430 NVM Sets: Not Supported 00:15:27.430 Read Recovery Levels: Not Supported 00:15:27.430 Endurance Groups: Not Supported 00:15:27.430 Predictable Latency Mode: Not Supported 00:15:27.430 Traffic Based Keep ALive: Not Supported 00:15:27.430 Namespace Granularity: Not Supported 00:15:27.430 SQ Associations: Not Supported 00:15:27.430 UUID List: Not Supported 00:15:27.430 Multi-Domain Subsystem: Not Supported 00:15:27.430 Fixed Capacity Management: Not Supported 00:15:27.430 Variable Capacity Management: Not Supported 00:15:27.430 Delete Endurance Group: Not Supported 00:15:27.430 Delete NVM Set: Not Supported 00:15:27.430 Extended LBA Formats Supported: Not Supported 00:15:27.430 Flexible Data Placement Supported: Not Supported 00:15:27.430 00:15:27.430 Controller Memory Buffer Support 00:15:27.430 ================================ 00:15:27.430 Supported: No 00:15:27.430 00:15:27.430 Persistent Memory Region Support 00:15:27.430 ================================ 00:15:27.430 Supported: No 00:15:27.430 00:15:27.430 Admin Command Set Attributes 00:15:27.430 ============================ 00:15:27.430 Security Send/Receive: Not Supported 00:15:27.430 Format NVM: Not Supported 00:15:27.430 Firmware Activate/Download: Not Supported 00:15:27.430 Namespace Management: Not Supported 00:15:27.430 Device Self-Test: Not Supported 00:15:27.430 Directives: Not Supported 00:15:27.430 NVMe-MI: Not Supported 00:15:27.430 Virtualization Management: Not Supported 00:15:27.430 Doorbell Buffer Config: Not Supported 00:15:27.430 Get LBA Status Capability: Not Supported 00:15:27.430 Command & Feature Lockdown Capability: Not Supported 00:15:27.430 Abort Command Limit: 1 00:15:27.430 Async 
Event Request Limit: 4 00:15:27.430 Number of Firmware Slots: N/A 00:15:27.430 Firmware Slot 1 Read-Only: N/A 00:15:27.430 [2024-07-15 12:40:00.107922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.430 [2024-07-15 12:40:00.107930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.430 [2024-07-15 12:40:00.107934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.430 [2024-07-15 12:40:00.107939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176f40) on tqpair=0x11352c0 00:15:27.430 Firmware Activation Without Reset: N/A 00:15:27.430 Multiple Update Detection Support: N/A 00:15:27.430 Firmware Update Granularity: No Information Provided 00:15:27.430 Per-Namespace SMART Log: No 00:15:27.430 Asymmetric Namespace Access Log Page: Not Supported 00:15:27.430 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:27.430 Command Effects Log Page: Not Supported 00:15:27.430 Get Log Page Extended Data: Supported 00:15:27.430 Telemetry Log Pages: Not Supported 00:15:27.430 Persistent Event Log Pages: Not Supported 00:15:27.430 Supported Log Pages Log Page: May Support 00:15:27.430 Commands Supported & Effects Log Page: Not Supported 00:15:27.430 Feature Identifiers & Effects Log Page:May Support 00:15:27.430 NVMe-MI Commands & Effects Log Page: May Support 00:15:27.430 Data Area 4 for Telemetry Log: Not Supported 00:15:27.430 Error Log Page Entries Supported: 128 00:15:27.430 Keep Alive: Not Supported 00:15:27.430 00:15:27.430 NVM Command Set Attributes 00:15:27.430 ========================== 00:15:27.430 Submission Queue Entry Size 00:15:27.430 Max: 1 00:15:27.430 Min: 1 00:15:27.430 Completion Queue Entry Size 00:15:27.430 Max: 1 00:15:27.430 Min: 1 00:15:27.430 Number of Namespaces: 0 00:15:27.430 Compare Command: Not Supported 00:15:27.430 Write Uncorrectable Command: Not Supported 00:15:27.430 Dataset Management Command: Not Supported 00:15:27.430 Write Zeroes Command: Not Supported 00:15:27.430 Set Features Save Field: Not Supported 00:15:27.430 Reservations: Not Supported 00:15:27.431 Timestamp: Not Supported 00:15:27.431 Copy: Not Supported 00:15:27.431 Volatile Write Cache: Not Present 00:15:27.431 Atomic Write Unit (Normal): 1 00:15:27.431 Atomic Write Unit (PFail): 1 00:15:27.431 Atomic Compare & Write Unit: 1 00:15:27.431 Fused Compare & Write: Supported 00:15:27.431 Scatter-Gather List 00:15:27.431 SGL Command Set: Supported 00:15:27.431 SGL Keyed: Supported 00:15:27.431 SGL Bit Bucket Descriptor: Not Supported 00:15:27.431 SGL Metadata Pointer: Not Supported 00:15:27.431 Oversized SGL: Not Supported 00:15:27.431 SGL Metadata Address: Not Supported 00:15:27.431 SGL Offset: Supported 00:15:27.431 Transport SGL Data Block: Not Supported 00:15:27.431 Replay Protected Memory Block: Not Supported 00:15:27.431 00:15:27.431 Firmware Slot Information 00:15:27.431 ========================= 00:15:27.431 Active slot: 0 00:15:27.431 00:15:27.431 00:15:27.431 Error Log 00:15:27.431 ========= 00:15:27.431 00:15:27.431 Active Namespaces 00:15:27.431 ================= 00:15:27.431 Discovery Log Page 00:15:27.431 ================== 00:15:27.431 Generation Counter: 2 00:15:27.431 Number of Records: 2 00:15:27.431 Record Format: 0 00:15:27.431 00:15:27.431 Discovery Log Entry 0 00:15:27.431 ---------------------- 00:15:27.431 Transport Type: 3 (TCP) 00:15:27.431 Address Family: 1 (IPv4) 00:15:27.431 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:27.431 Entry Flags: 00:15:27.431 Duplicate Returned 
Information: 1 00:15:27.431 Explicit Persistent Connection Support for Discovery: 1 00:15:27.431 Transport Requirements: 00:15:27.431 Secure Channel: Not Required 00:15:27.431 Port ID: 0 (0x0000) 00:15:27.431 Controller ID: 65535 (0xffff) 00:15:27.431 Admin Max SQ Size: 128 00:15:27.431 Transport Service Identifier: 4420 00:15:27.431 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:27.431 Transport Address: 10.0.0.2 00:15:27.431 Discovery Log Entry 1 00:15:27.431 ---------------------- 00:15:27.431 Transport Type: 3 (TCP) 00:15:27.431 Address Family: 1 (IPv4) 00:15:27.431 Subsystem Type: 2 (NVM Subsystem) 00:15:27.431 Entry Flags: 00:15:27.431 Duplicate Returned Information: 0 00:15:27.431 Explicit Persistent Connection Support for Discovery: 0 00:15:27.431 Transport Requirements: 00:15:27.431 Secure Channel: Not Required 00:15:27.431 Port ID: 0 (0x0000) 00:15:27.431 Controller ID: 65535 (0xffff) 00:15:27.431 Admin Max SQ Size: 128 00:15:27.431 Transport Service Identifier: 4420 00:15:27.431 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:27.431 Transport Address: 10.0.0.2 [2024-07-15 12:40:00.108081] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:27.431 [2024-07-15 12:40:00.108099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176940) on tqpair=0x11352c0 00:15:27.431 [2024-07-15 12:40:00.108107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.431 [2024-07-15 12:40:00.108113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176ac0) on tqpair=0x11352c0 00:15:27.431 [2024-07-15 12:40:00.108118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.431 [2024-07-15 12:40:00.108124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176c40) on tqpair=0x11352c0 00:15:27.431 [2024-07-15 12:40:00.108129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.431 [2024-07-15 12:40:00.108135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176dc0) on tqpair=0x11352c0 00:15:27.431 [2024-07-15 12:40:00.108140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.431 [2024-07-15 12:40:00.108151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.431 [2024-07-15 12:40:00.108156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.431 [2024-07-15 12:40:00.108160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11352c0) 00:15:27.431 [2024-07-15 12:40:00.108169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.431 [2024-07-15 12:40:00.108196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176dc0, cid 3, qid 0 00:15:27.431 [2024-07-15 12:40:00.108283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.431 [2024-07-15 12:40:00.108301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.431 [2024-07-15 12:40:00.108306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.431 [2024-07-15 12:40:00.108311] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176dc0) on tqpair=0x11352c0 00:15:27.431 [2024-07-15 12:40:00.108319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.431 [2024-07-15 12:40:00.108324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.431 [2024-07-15 12:40:00.108328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11352c0) 00:15:27.431 [2024-07-15 12:40:00.108336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.431 [2024-07-15 12:40:00.108361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176dc0, cid 3, qid 0 00:15:27.695 [2024-07-15 12:40:00.108463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.695 [2024-07-15 12:40:00.108470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.695 [2024-07-15 12:40:00.108474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.695 [2024-07-15 12:40:00.108478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176dc0) on tqpair=0x11352c0 00:15:27.695 [2024-07-15 12:40:00.108484] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:27.695 [2024-07-15 12:40:00.108490] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:27.695 [2024-07-15 12:40:00.108500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.695 [2024-07-15 12:40:00.108505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.695 [2024-07-15 12:40:00.108510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11352c0) 00:15:27.695 [2024-07-15 12:40:00.108517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.695 [2024-07-15 12:40:00.108536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176dc0, cid 3, qid 0 00:15:27.695 [2024-07-15 12:40:00.108608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.695 [2024-07-15 12:40:00.108615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.695 [2024-07-15 12:40:00.108619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.695 [2024-07-15 12:40:00.108623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176dc0) on tqpair=0x11352c0 00:15:27.695 [2024-07-15 12:40:00.108635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.695 [2024-07-15 12:40:00.108640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.695 [2024-07-15 12:40:00.108644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11352c0) 00:15:27.695 [2024-07-15 12:40:00.108652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.695 [2024-07-15 12:40:00.108670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176dc0, cid 3, qid 0 00:15:27.696 [2024-07-15 12:40:00.112759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.112779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 
12:40:00.112784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.112790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176dc0) on tqpair=0x11352c0 00:15:27.696 [2024-07-15 12:40:00.112803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.112809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.112813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11352c0) 00:15:27.696 [2024-07-15 12:40:00.112822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.696 [2024-07-15 12:40:00.112848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1176dc0, cid 3, qid 0 00:15:27.696 [2024-07-15 12:40:00.112943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.112951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 12:40:00.112955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.112959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1176dc0) on tqpair=0x11352c0 00:15:27.696 [2024-07-15 12:40:00.112968] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:15:27.696 00:15:27.696 12:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:27.696 [2024-07-15 12:40:00.160060] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
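The xtrace line above shows the harness re-running spdk_nvme_identify, this time against subnqn:nqn.2016-06.io.spdk:cnode1, and the DPDK EAL parameter line that follows (-c 0x1, --no-pci, --iova-mode=pa, ...) records how the example initialized the SPDK environment for a fabrics-only run with no local PCI devices. A rough equivalent through the C env API is sketched below; the function name is invented, only a few of the flags are mapped, and the flag-to-field mapping is my reading of the env API rather than anything stated in this log.

#include "spdk/stdinc.h"
#include "spdk/env.h"

static int
init_env_for_fabrics_only(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";   /* illustrative process name */
        opts.core_mask = "0x1";          /* corresponds to "-c 0x1" above */
        opts.no_pci = true;              /* corresponds to "--no-pci"; TCP needs no PCI probe */

        return spdk_env_init(&opts);     /* 0 on success */
}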
00:15:27.696 [2024-07-15 12:40:00.160126] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74936 ] 00:15:27.696 [2024-07-15 12:40:00.301199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:27.696 [2024-07-15 12:40:00.301288] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:27.696 [2024-07-15 12:40:00.301296] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:27.696 [2024-07-15 12:40:00.301312] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:27.696 [2024-07-15 12:40:00.301320] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:27.696 [2024-07-15 12:40:00.301488] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:27.696 [2024-07-15 12:40:00.301555] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb7d2c0 0 00:15:27.696 [2024-07-15 12:40:00.313783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:27.696 [2024-07-15 12:40:00.313806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:27.696 [2024-07-15 12:40:00.313812] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:27.696 [2024-07-15 12:40:00.313816] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:27.696 [2024-07-15 12:40:00.313871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.313879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.313884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.696 [2024-07-15 12:40:00.313900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:27.696 [2024-07-15 12:40:00.313932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.696 [2024-07-15 12:40:00.321780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.321800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 12:40:00.321805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.321811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.696 [2024-07-15 12:40:00.321826] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:27.696 [2024-07-15 12:40:00.321836] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:27.696 [2024-07-15 12:40:00.321844] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:27.696 [2024-07-15 12:40:00.321867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.321874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.321878] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.696 [2024-07-15 12:40:00.321888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.696 [2024-07-15 12:40:00.321916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.696 [2024-07-15 12:40:00.321988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.321996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 12:40:00.322000] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.696 [2024-07-15 12:40:00.322011] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:27.696 [2024-07-15 12:40:00.322020] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:27.696 [2024-07-15 12:40:00.322029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.696 [2024-07-15 12:40:00.322046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.696 [2024-07-15 12:40:00.322066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.696 [2024-07-15 12:40:00.322215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.322223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 12:40:00.322227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.696 [2024-07-15 12:40:00.322237] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:27.696 [2024-07-15 12:40:00.322246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:27.696 [2024-07-15 12:40:00.322255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.696 [2024-07-15 12:40:00.322271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.696 [2024-07-15 12:40:00.322290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.696 [2024-07-15 12:40:00.322462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.322469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 12:40:00.322473] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.696 [2024-07-15 12:40:00.322484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:27.696 [2024-07-15 12:40:00.322495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.696 [2024-07-15 12:40:00.322512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.696 [2024-07-15 12:40:00.322530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.696 [2024-07-15 12:40:00.322911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.322925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 12:40:00.322930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.322935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.696 [2024-07-15 12:40:00.322940] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:27.696 [2024-07-15 12:40:00.322946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:27.696 [2024-07-15 12:40:00.322955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:27.696 [2024-07-15 12:40:00.323063] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:27.696 [2024-07-15 12:40:00.323068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:27.696 [2024-07-15 12:40:00.323079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.323084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.323088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.696 [2024-07-15 12:40:00.323096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.696 [2024-07-15 12:40:00.323118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.696 [2024-07-15 12:40:00.323241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.696 [2024-07-15 12:40:00.323249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.696 [2024-07-15 12:40:00.323253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.696 [2024-07-15 12:40:00.323257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.697 [2024-07-15 12:40:00.323263] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:27.697 [2024-07-15 12:40:00.323273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.323279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.323283] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.323290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.697 [2024-07-15 12:40:00.323310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.697 [2024-07-15 12:40:00.323700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.697 [2024-07-15 12:40:00.323713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.697 [2024-07-15 12:40:00.323718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.323722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.697 [2024-07-15 12:40:00.323757] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:27.697 [2024-07-15 12:40:00.323765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.323775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:27.697 [2024-07-15 12:40:00.323787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.323800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.323805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.323813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.697 [2024-07-15 12:40:00.323836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.697 [2024-07-15 12:40:00.324025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.697 [2024-07-15 12:40:00.324033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.697 [2024-07-15 12:40:00.324037] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324041] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=4096, cccid=0 00:15:27.697 [2024-07-15 12:40:00.324047] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbe940) on tqpair(0xb7d2c0): expected_datao=0, payload_size=4096 00:15:27.697 [2024-07-15 12:40:00.324053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324062] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324067] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 
12:40:00.324195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.697 [2024-07-15 12:40:00.324203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.697 [2024-07-15 12:40:00.324207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.697 [2024-07-15 12:40:00.324221] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:27.697 [2024-07-15 12:40:00.324227] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:27.697 [2024-07-15 12:40:00.324232] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:27.697 [2024-07-15 12:40:00.324237] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:27.697 [2024-07-15 12:40:00.324242] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:27.697 [2024-07-15 12:40:00.324248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.324258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.324266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.324284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.697 [2024-07-15 12:40:00.324350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.697 [2024-07-15 12:40:00.324758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.697 [2024-07-15 12:40:00.324772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.697 [2024-07-15 12:40:00.324778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.697 [2024-07-15 12:40:00.324791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.324807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.697 [2024-07-15 12:40:00.324815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb7d2c0) 00:15:27.697 
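Around this point the driver arms its Asynchronous Event Requests for cnode1: SET FEATURES ASYNC EVENT CONFIGURATION above, then four ASYNC EVENT REQUEST capsules (cid 0 and 1 here, cid 2 and 3 just below), followed by the keep-alive timer setup, mirroring what it did for the discovery controller earlier. A sketch of the host-side hookup, assuming an already attached ctrlr; the callback and helper names are made up.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
on_async_event(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        /* cdw0 of an AER completion carries the async event type and log page id. */
        printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

static void
arm_async_event_callback(struct spdk_nvme_ctrlr *ctrlr)
{
        /* The driver has already queued the AER commands seen in the trace;
         * this only tells it where to deliver their completions.  The admin
         * queue still has to be polled for them (and for keep-alive traffic). */
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_async_event, NULL);
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}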
[2024-07-15 12:40:00.324830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.697 [2024-07-15 12:40:00.324837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.324852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.697 [2024-07-15 12:40:00.324859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.324874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.697 [2024-07-15 12:40:00.324880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.324896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.324905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.324909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.324917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.697 [2024-07-15 12:40:00.324942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbe940, cid 0, qid 0 00:15:27.697 [2024-07-15 12:40:00.324949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbeac0, cid 1, qid 0 00:15:27.697 [2024-07-15 12:40:00.324955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbec40, cid 2, qid 0 00:15:27.697 [2024-07-15 12:40:00.324960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbedc0, cid 3, qid 0 00:15:27.697 [2024-07-15 12:40:00.324965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbef40, cid 4, qid 0 00:15:27.697 [2024-07-15 12:40:00.325326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.697 [2024-07-15 12:40:00.325340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.697 [2024-07-15 12:40:00.325345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.325349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbef40) on tqpair=0xb7d2c0 00:15:27.697 [2024-07-15 12:40:00.325355] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:27.697 [2024-07-15 12:40:00.325366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.325376] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.325383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.325391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.325395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.325399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.325407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.697 [2024-07-15 12:40:00.325428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbef40, cid 4, qid 0 00:15:27.697 [2024-07-15 12:40:00.325564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.697 [2024-07-15 12:40:00.325571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.697 [2024-07-15 12:40:00.325575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.325579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbef40) on tqpair=0xb7d2c0 00:15:27.697 [2024-07-15 12:40:00.325641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.325652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:27.697 [2024-07-15 12:40:00.325662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.325666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb7d2c0) 00:15:27.697 [2024-07-15 12:40:00.325674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.697 [2024-07-15 12:40:00.325695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbef40, cid 4, qid 0 00:15:27.697 [2024-07-15 12:40:00.329755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.697 [2024-07-15 12:40:00.329774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.697 [2024-07-15 12:40:00.329779] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.329784] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=4096, cccid=4 00:15:27.697 [2024-07-15 12:40:00.329798] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbef40) on tqpair(0xb7d2c0): expected_datao=0, payload_size=4096 00:15:27.697 [2024-07-15 12:40:00.329803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.329812] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.329817] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.329823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.697 [2024-07-15 12:40:00.329830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:27.697 [2024-07-15 12:40:00.329834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.697 [2024-07-15 12:40:00.329839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbef40) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.329858] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:27.698 [2024-07-15 12:40:00.329872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.329887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.329897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.329902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.329910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.329938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbef40, cid 4, qid 0 00:15:27.698 [2024-07-15 12:40:00.330027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.698 [2024-07-15 12:40:00.330034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.698 [2024-07-15 12:40:00.330038] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330042] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=4096, cccid=4 00:15:27.698 [2024-07-15 12:40:00.330047] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbef40) on tqpair(0xb7d2c0): expected_datao=0, payload_size=4096 00:15:27.698 [2024-07-15 12:40:00.330053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330060] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330065] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.698 [2024-07-15 12:40:00.330152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.698 [2024-07-15 12:40:00.330156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbef40) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.330178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.330213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.330235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbef40, cid 4, qid 0 00:15:27.698 [2024-07-15 12:40:00.330472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.698 [2024-07-15 12:40:00.330480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.698 [2024-07-15 12:40:00.330484] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330488] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=4096, cccid=4 00:15:27.698 [2024-07-15 12:40:00.330493] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbef40) on tqpair(0xb7d2c0): expected_datao=0, payload_size=4096 00:15:27.698 [2024-07-15 12:40:00.330498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330506] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330511] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.698 [2024-07-15 12:40:00.330629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.698 [2024-07-15 12:40:00.330634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbef40) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.330648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330696] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:27.698 [2024-07-15 12:40:00.330701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:27.698 [2024-07-15 12:40:00.330707] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:27.698 [2024-07-15 12:40:00.330726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.330755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.330763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.330771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.330778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.698 [2024-07-15 12:40:00.330807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbef40, cid 4, qid 0 00:15:27.698 [2024-07-15 12:40:00.330815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf0c0, cid 5, qid 0 00:15:27.698 [2024-07-15 12:40:00.331149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.698 [2024-07-15 12:40:00.331164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.698 [2024-07-15 12:40:00.331169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbef40) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.331181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.698 [2024-07-15 12:40:00.331188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.698 [2024-07-15 12:40:00.331192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf0c0) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.331208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.331221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.331242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf0c0, cid 5, qid 0 00:15:27.698 [2024-07-15 12:40:00.331305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.698 [2024-07-15 12:40:00.331313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.698 [2024-07-15 12:40:00.331317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf0c0) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.331332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.331345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.331364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf0c0, cid 5, qid 0 00:15:27.698 [2024-07-15 12:40:00.331442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.698 [2024-07-15 12:40:00.331450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:27.698 [2024-07-15 12:40:00.331454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf0c0) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.331469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.331482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.331501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf0c0, cid 5, qid 0 00:15:27.698 [2024-07-15 12:40:00.331740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.698 [2024-07-15 12:40:00.331751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.698 [2024-07-15 12:40:00.331755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf0c0) on tqpair=0xb7d2c0 00:15:27.698 [2024-07-15 12:40:00.331781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.331795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.331804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.331816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.331824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.331835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.331848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.698 [2024-07-15 12:40:00.331853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb7d2c0) 00:15:27.698 [2024-07-15 12:40:00.331860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.698 [2024-07-15 12:40:00.331884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf0c0, cid 5, qid 0 00:15:27.698 [2024-07-15 12:40:00.331892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbef40, cid 4, qid 0 00:15:27.698 [2024-07-15 12:40:00.331897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf240, cid 6, qid 0 00:15:27.698 [2024-07-15 
12:40:00.331902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf3c0, cid 7, qid 0 00:15:27.698 [2024-07-15 12:40:00.332437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.698 [2024-07-15 12:40:00.332452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.698 [2024-07-15 12:40:00.332457] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332461] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=8192, cccid=5 00:15:27.699 [2024-07-15 12:40:00.332466] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbf0c0) on tqpair(0xb7d2c0): expected_datao=0, payload_size=8192 00:15:27.699 [2024-07-15 12:40:00.332472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332491] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332497] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.699 [2024-07-15 12:40:00.332510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.699 [2024-07-15 12:40:00.332514] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332518] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=512, cccid=4 00:15:27.699 [2024-07-15 12:40:00.332523] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbef40) on tqpair(0xb7d2c0): expected_datao=0, payload_size=512 00:15:27.699 [2024-07-15 12:40:00.332528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332535] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332539] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.699 [2024-07-15 12:40:00.332551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.699 [2024-07-15 12:40:00.332555] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332559] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=512, cccid=6 00:15:27.699 [2024-07-15 12:40:00.332564] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbf240) on tqpair(0xb7d2c0): expected_datao=0, payload_size=512 00:15:27.699 [2024-07-15 12:40:00.332568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332575] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332579] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.699 [2024-07-15 12:40:00.332591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.699 [2024-07-15 12:40:00.332595] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332599] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb7d2c0): datao=0, datal=4096, cccid=7 00:15:27.699 [2024-07-15 12:40:00.332603] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbf3c0) on tqpair(0xb7d2c0): expected_datao=0, payload_size=4096 00:15:27.699 [2024-07-15 12:40:00.332608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332615] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332620] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.699 [2024-07-15 12:40:00.332635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.699 [2024-07-15 12:40:00.332639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf0c0) on tqpair=0xb7d2c0 00:15:27.699 [2024-07-15 12:40:00.332663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.699 [2024-07-15 12:40:00.332671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.699 [2024-07-15 12:40:00.332675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbef40) on tqpair=0xb7d2c0 00:15:27.699 [2024-07-15 12:40:00.332693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.699 [2024-07-15 12:40:00.332700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.699 [2024-07-15 12:40:00.332705] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf240) on tqpair=0xb7d2c0 00:15:27.699 [2024-07-15 12:40:00.332717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.699 [2024-07-15 12:40:00.332723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.699 [2024-07-15 12:40:00.332739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.699 [2024-07-15 12:40:00.332748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf3c0) on tqpair=0xb7d2c0 00:15:27.699 ===================================================== 00:15:27.699 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.699 ===================================================== 00:15:27.699 Controller Capabilities/Features 00:15:27.699 ================================ 00:15:27.699 Vendor ID: 8086 00:15:27.699 Subsystem Vendor ID: 8086 00:15:27.699 Serial Number: SPDK00000000000001 00:15:27.699 Model Number: SPDK bdev Controller 00:15:27.699 Firmware Version: 24.09 00:15:27.699 Recommended Arb Burst: 6 00:15:27.699 IEEE OUI Identifier: e4 d2 5c 00:15:27.699 Multi-path I/O 00:15:27.699 May have multiple subsystem ports: Yes 00:15:27.699 May have multiple controllers: Yes 00:15:27.699 Associated with SR-IOV VF: No 00:15:27.699 Max Data Transfer Size: 131072 00:15:27.699 Max Number of Namespaces: 32 00:15:27.699 Max Number of I/O Queues: 127 00:15:27.699 NVMe Specification Version (VS): 1.3 00:15:27.699 NVMe Specification Version (Identify): 1.3 00:15:27.699 Maximum Queue Entries: 128 00:15:27.699 Contiguous Queues Required: Yes 00:15:27.699 Arbitration Mechanisms Supported 00:15:27.699 Weighted Round Robin: Not Supported 00:15:27.699 Vendor Specific: Not Supported 00:15:27.699 Reset Timeout: 15000 ms 00:15:27.699 
Doorbell Stride: 4 bytes 00:15:27.699 NVM Subsystem Reset: Not Supported 00:15:27.699 Command Sets Supported 00:15:27.699 NVM Command Set: Supported 00:15:27.699 Boot Partition: Not Supported 00:15:27.699 Memory Page Size Minimum: 4096 bytes 00:15:27.699 Memory Page Size Maximum: 4096 bytes 00:15:27.699 Persistent Memory Region: Not Supported 00:15:27.699 Optional Asynchronous Events Supported 00:15:27.699 Namespace Attribute Notices: Supported 00:15:27.699 Firmware Activation Notices: Not Supported 00:15:27.699 ANA Change Notices: Not Supported 00:15:27.699 PLE Aggregate Log Change Notices: Not Supported 00:15:27.699 LBA Status Info Alert Notices: Not Supported 00:15:27.699 EGE Aggregate Log Change Notices: Not Supported 00:15:27.699 Normal NVM Subsystem Shutdown event: Not Supported 00:15:27.699 Zone Descriptor Change Notices: Not Supported 00:15:27.699 Discovery Log Change Notices: Not Supported 00:15:27.699 Controller Attributes 00:15:27.699 128-bit Host Identifier: Supported 00:15:27.699 Non-Operational Permissive Mode: Not Supported 00:15:27.699 NVM Sets: Not Supported 00:15:27.699 Read Recovery Levels: Not Supported 00:15:27.699 Endurance Groups: Not Supported 00:15:27.699 Predictable Latency Mode: Not Supported 00:15:27.699 Traffic Based Keep ALive: Not Supported 00:15:27.699 Namespace Granularity: Not Supported 00:15:27.699 SQ Associations: Not Supported 00:15:27.699 UUID List: Not Supported 00:15:27.699 Multi-Domain Subsystem: Not Supported 00:15:27.699 Fixed Capacity Management: Not Supported 00:15:27.699 Variable Capacity Management: Not Supported 00:15:27.699 Delete Endurance Group: Not Supported 00:15:27.699 Delete NVM Set: Not Supported 00:15:27.699 Extended LBA Formats Supported: Not Supported 00:15:27.699 Flexible Data Placement Supported: Not Supported 00:15:27.699 00:15:27.699 Controller Memory Buffer Support 00:15:27.699 ================================ 00:15:27.699 Supported: No 00:15:27.699 00:15:27.699 Persistent Memory Region Support 00:15:27.699 ================================ 00:15:27.699 Supported: No 00:15:27.699 00:15:27.699 Admin Command Set Attributes 00:15:27.699 ============================ 00:15:27.699 Security Send/Receive: Not Supported 00:15:27.699 Format NVM: Not Supported 00:15:27.699 Firmware Activate/Download: Not Supported 00:15:27.699 Namespace Management: Not Supported 00:15:27.699 Device Self-Test: Not Supported 00:15:27.699 Directives: Not Supported 00:15:27.699 NVMe-MI: Not Supported 00:15:27.699 Virtualization Management: Not Supported 00:15:27.699 Doorbell Buffer Config: Not Supported 00:15:27.699 Get LBA Status Capability: Not Supported 00:15:27.699 Command & Feature Lockdown Capability: Not Supported 00:15:27.699 Abort Command Limit: 4 00:15:27.699 Async Event Request Limit: 4 00:15:27.699 Number of Firmware Slots: N/A 00:15:27.699 Firmware Slot 1 Read-Only: N/A 00:15:27.699 Firmware Activation Without Reset: N/A 00:15:27.699 Multiple Update Detection Support: N/A 00:15:27.699 Firmware Update Granularity: No Information Provided 00:15:27.699 Per-Namespace SMART Log: No 00:15:27.699 Asymmetric Namespace Access Log Page: Not Supported 00:15:27.699 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:27.699 Command Effects Log Page: Supported 00:15:27.699 Get Log Page Extended Data: Supported 00:15:27.699 Telemetry Log Pages: Not Supported 00:15:27.699 Persistent Event Log Pages: Not Supported 00:15:27.699 Supported Log Pages Log Page: May Support 00:15:27.699 Commands Supported & Effects Log Page: Not Supported 00:15:27.699 Feature Identifiers & 
Effects Log Page:May Support 00:15:27.699 NVMe-MI Commands & Effects Log Page: May Support 00:15:27.699 Data Area 4 for Telemetry Log: Not Supported 00:15:27.699 Error Log Page Entries Supported: 128 00:15:27.699 Keep Alive: Supported 00:15:27.699 Keep Alive Granularity: 10000 ms 00:15:27.699 00:15:27.699 NVM Command Set Attributes 00:15:27.699 ========================== 00:15:27.699 Submission Queue Entry Size 00:15:27.699 Max: 64 00:15:27.699 Min: 64 00:15:27.699 Completion Queue Entry Size 00:15:27.699 Max: 16 00:15:27.699 Min: 16 00:15:27.699 Number of Namespaces: 32 00:15:27.699 Compare Command: Supported 00:15:27.699 Write Uncorrectable Command: Not Supported 00:15:27.699 Dataset Management Command: Supported 00:15:27.699 Write Zeroes Command: Supported 00:15:27.699 Set Features Save Field: Not Supported 00:15:27.699 Reservations: Supported 00:15:27.699 Timestamp: Not Supported 00:15:27.699 Copy: Supported 00:15:27.699 Volatile Write Cache: Present 00:15:27.700 Atomic Write Unit (Normal): 1 00:15:27.700 Atomic Write Unit (PFail): 1 00:15:27.700 Atomic Compare & Write Unit: 1 00:15:27.700 Fused Compare & Write: Supported 00:15:27.700 Scatter-Gather List 00:15:27.700 SGL Command Set: Supported 00:15:27.700 SGL Keyed: Supported 00:15:27.700 SGL Bit Bucket Descriptor: Not Supported 00:15:27.700 SGL Metadata Pointer: Not Supported 00:15:27.700 Oversized SGL: Not Supported 00:15:27.700 SGL Metadata Address: Not Supported 00:15:27.700 SGL Offset: Supported 00:15:27.700 Transport SGL Data Block: Not Supported 00:15:27.700 Replay Protected Memory Block: Not Supported 00:15:27.700 00:15:27.700 Firmware Slot Information 00:15:27.700 ========================= 00:15:27.700 Active slot: 1 00:15:27.700 Slot 1 Firmware Revision: 24.09 00:15:27.700 00:15:27.700 00:15:27.700 Commands Supported and Effects 00:15:27.700 ============================== 00:15:27.700 Admin Commands 00:15:27.700 -------------- 00:15:27.700 Get Log Page (02h): Supported 00:15:27.700 Identify (06h): Supported 00:15:27.700 Abort (08h): Supported 00:15:27.700 Set Features (09h): Supported 00:15:27.700 Get Features (0Ah): Supported 00:15:27.700 Asynchronous Event Request (0Ch): Supported 00:15:27.700 Keep Alive (18h): Supported 00:15:27.700 I/O Commands 00:15:27.700 ------------ 00:15:27.700 Flush (00h): Supported LBA-Change 00:15:27.700 Write (01h): Supported LBA-Change 00:15:27.700 Read (02h): Supported 00:15:27.700 Compare (05h): Supported 00:15:27.700 Write Zeroes (08h): Supported LBA-Change 00:15:27.700 Dataset Management (09h): Supported LBA-Change 00:15:27.700 Copy (19h): Supported LBA-Change 00:15:27.700 00:15:27.700 Error Log 00:15:27.700 ========= 00:15:27.700 00:15:27.700 Arbitration 00:15:27.700 =========== 00:15:27.700 Arbitration Burst: 1 00:15:27.700 00:15:27.700 Power Management 00:15:27.700 ================ 00:15:27.700 Number of Power States: 1 00:15:27.700 Current Power State: Power State #0 00:15:27.700 Power State #0: 00:15:27.700 Max Power: 0.00 W 00:15:27.700 Non-Operational State: Operational 00:15:27.700 Entry Latency: Not Reported 00:15:27.700 Exit Latency: Not Reported 00:15:27.700 Relative Read Throughput: 0 00:15:27.700 Relative Read Latency: 0 00:15:27.700 Relative Write Throughput: 0 00:15:27.700 Relative Write Latency: 0 00:15:27.700 Idle Power: Not Reported 00:15:27.700 Active Power: Not Reported 00:15:27.700 Non-Operational Permissive Mode: Not Supported 00:15:27.700 00:15:27.700 Health Information 00:15:27.700 ================== 00:15:27.700 Critical Warnings: 00:15:27.700 Available Spare Space: 
OK 00:15:27.700 Temperature: OK 00:15:27.700 Device Reliability: OK 00:15:27.700 Read Only: No 00:15:27.700 Volatile Memory Backup: OK 00:15:27.700 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:27.700 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:27.700 Available Spare: 0% 00:15:27.700 Available Spare Threshold: 0% 00:15:27.700 Life Percentage Used:[2024-07-15 12:40:00.332865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.332873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb7d2c0) 00:15:27.700 [2024-07-15 12:40:00.332882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.700 [2024-07-15 12:40:00.332908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbf3c0, cid 7, qid 0 00:15:27.700 [2024-07-15 12:40:00.333019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.700 [2024-07-15 12:40:00.333027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.700 [2024-07-15 12:40:00.333031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.333035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbf3c0) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.333075] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:27.700 [2024-07-15 12:40:00.333088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbe940) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.333096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.700 [2024-07-15 12:40:00.333102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbeac0) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.333107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.700 [2024-07-15 12:40:00.333113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbec40) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.333118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.700 [2024-07-15 12:40:00.333124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbedc0) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.333129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.700 [2024-07-15 12:40:00.333138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.333144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.333148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb7d2c0) 00:15:27.700 [2024-07-15 12:40:00.333156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.700 [2024-07-15 12:40:00.333180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbedc0, cid 3, qid 0 00:15:27.700 [2024-07-15 12:40:00.333323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.700 [2024-07-15 12:40:00.333331] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.700 [2024-07-15 12:40:00.333335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.333339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbedc0) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.333347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.333352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.333356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb7d2c0) 00:15:27.700 [2024-07-15 12:40:00.333364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.700 [2024-07-15 12:40:00.333387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbedc0, cid 3, qid 0 00:15:27.700 [2024-07-15 12:40:00.337756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.700 [2024-07-15 12:40:00.337776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.700 [2024-07-15 12:40:00.337782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.337787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbedc0) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.337793] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:27.700 [2024-07-15 12:40:00.337799] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:27.700 [2024-07-15 12:40:00.337812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.337819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.337823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb7d2c0) 00:15:27.700 [2024-07-15 12:40:00.337832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.700 [2024-07-15 12:40:00.337858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbedc0, cid 3, qid 0 00:15:27.700 [2024-07-15 12:40:00.337917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.700 [2024-07-15 12:40:00.337925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.700 [2024-07-15 12:40:00.337929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.700 [2024-07-15 12:40:00.337933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbedc0) on tqpair=0xb7d2c0 00:15:27.700 [2024-07-15 12:40:00.337942] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:15:27.700 0% 00:15:27.700 Data Units Read: 0 00:15:27.700 Data Units Written: 0 00:15:27.700 Host Read Commands: 0 00:15:27.700 Host Write Commands: 0 00:15:27.700 Controller Busy Time: 0 minutes 00:15:27.700 Power Cycles: 0 00:15:27.700 Power On Hours: 0 hours 00:15:27.700 Unsafe Shutdowns: 0 00:15:27.700 Unrecoverable Media Errors: 0 00:15:27.700 Lifetime Error Log Entries: 0 00:15:27.700 Warning Temperature Time: 0 minutes 00:15:27.700 Critical Temperature Time: 0 minutes 00:15:27.700 00:15:27.700 Number of Queues 00:15:27.701 
================ 00:15:27.701 Number of I/O Submission Queues: 127 00:15:27.701 Number of I/O Completion Queues: 127 00:15:27.701 00:15:27.701 Active Namespaces 00:15:27.701 ================= 00:15:27.701 Namespace ID:1 00:15:27.701 Error Recovery Timeout: Unlimited 00:15:27.701 Command Set Identifier: NVM (00h) 00:15:27.701 Deallocate: Supported 00:15:27.701 Deallocated/Unwritten Error: Not Supported 00:15:27.701 Deallocated Read Value: Unknown 00:15:27.701 Deallocate in Write Zeroes: Not Supported 00:15:27.701 Deallocated Guard Field: 0xFFFF 00:15:27.701 Flush: Supported 00:15:27.701 Reservation: Supported 00:15:27.701 Namespace Sharing Capabilities: Multiple Controllers 00:15:27.701 Size (in LBAs): 131072 (0GiB) 00:15:27.701 Capacity (in LBAs): 131072 (0GiB) 00:15:27.701 Utilization (in LBAs): 131072 (0GiB) 00:15:27.701 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:27.701 EUI64: ABCDEF0123456789 00:15:27.701 UUID: aa8d378a-7edb-40af-b178-dd47e1f31b92 00:15:27.701 Thin Provisioning: Not Supported 00:15:27.701 Per-NS Atomic Units: Yes 00:15:27.701 Atomic Boundary Size (Normal): 0 00:15:27.701 Atomic Boundary Size (PFail): 0 00:15:27.701 Atomic Boundary Offset: 0 00:15:27.701 Maximum Single Source Range Length: 65535 00:15:27.701 Maximum Copy Length: 65535 00:15:27.701 Maximum Source Range Count: 1 00:15:27.701 NGUID/EUI64 Never Reused: No 00:15:27.701 Namespace Write Protected: No 00:15:27.701 Number of LBA Formats: 1 00:15:27.701 Current LBA Format: LBA Format #00 00:15:27.701 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:27.701 00:15:27.701 12:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.959 rmmod nvme_tcp 00:15:27.959 rmmod nvme_fabrics 00:15:27.959 rmmod nvme_keyring 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74892 ']' 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74892 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74892 ']' 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74892 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # uname 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74892 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.959 killing process with pid 74892 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74892' 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74892 00:15:27.959 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74892 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:28.218 00:15:28.218 real 0m2.604s 00:15:28.218 user 0m7.379s 00:15:28.218 sys 0m0.687s 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.218 12:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:28.218 ************************************ 00:15:28.218 END TEST nvmf_identify 00:15:28.218 ************************************ 00:15:28.218 12:40:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.218 12:40:00 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:28.218 12:40:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.218 12:40:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.218 12:40:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.218 ************************************ 00:15:28.218 START TEST nvmf_perf 00:15:28.218 ************************************ 00:15:28.218 12:40:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:28.477 * Looking for test storage... 
00:15:28.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:15:28.477 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.478 12:40:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:28.478 Cannot find device "nvmf_tgt_br" 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.478 Cannot find device "nvmf_tgt_br2" 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:28.478 Cannot find device "nvmf_tgt_br" 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:28.478 Cannot find device "nvmf_tgt_br2" 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.478 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.478 
12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:28.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:15:28.737 00:15:28.737 --- 10.0.0.2 ping statistics --- 00:15:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.737 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:28.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:28.737 00:15:28.737 --- 10.0.0.3 ping statistics --- 00:15:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.737 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:28.737 00:15:28.737 --- 10.0.0.1 ping statistics --- 00:15:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.737 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75100 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75100 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75100 ']' 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.737 12:40:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:28.995 [2024-07-15 12:40:01.435212] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:28.995 [2024-07-15 12:40:01.435312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.995 [2024-07-15 12:40:01.573587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.253 [2024-07-15 12:40:01.690442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.253 [2024-07-15 12:40:01.690713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
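For reference, the nvmf_veth_init sequence traced above boils down to a small veth/bridge topology: the host side keeps 10.0.0.1 as the initiator address while the target namespace owns 10.0.0.2 (plus 10.0.0.3 on a second interface). A condensed sketch, assuming the same interface names and addressing as the trace and run as root; the second target interface and the FORWARD rule follow the same pattern and are left out here:

# Rough manual recreation of the test topology (illustrative, not part of the captured run)
ip netns add nvmf_tgt_ns_spdk                                 # namespace that will host nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge the two host-side peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # same reachability check as above

With that in place the target can listen on 10.0.0.2:4420 from inside the namespace while the host acts as the initiator on 10.0.0.1, which is exactly what the rest of this test relies on.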
00:15:29.253 [2024-07-15 12:40:01.690750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.253 [2024-07-15 12:40:01.690761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.253 [2024-07-15 12:40:01.690768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.253 [2024-07-15 12:40:01.690891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.253 [2024-07-15 12:40:01.691064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.253 [2024-07-15 12:40:01.691516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.253 [2024-07-15 12:40:01.691526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.253 [2024-07-15 12:40:01.748157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:29.822 12:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.822 12:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:29.822 12:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.822 12:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.822 12:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:30.081 12:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.081 12:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:30.081 12:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:30.347 12:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:30.347 12:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:30.625 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:30.625 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:30.883 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:30.884 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:30.884 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:30.884 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:30.884 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:31.141 [2024-07-15 12:40:03.747840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.141 12:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:31.400 12:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:31.400 12:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:31.657 12:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:31.657 12:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:15:31.915 12:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.173 [2024-07-15 12:40:04.766269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.173 12:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.432 12:40:05 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:32.432 12:40:05 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:32.432 12:40:05 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:32.432 12:40:05 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:33.807 Initializing NVMe Controllers 00:15:33.807 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:33.807 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:33.807 Initialization complete. Launching workers. 00:15:33.807 ======================================================== 00:15:33.807 Latency(us) 00:15:33.807 Device Information : IOPS MiB/s Average min max 00:15:33.807 PCIE (0000:00:10.0) NSID 1 from core 0: 22811.37 89.11 1402.30 374.96 8162.44 00:15:33.807 ======================================================== 00:15:33.807 Total : 22811.37 89.11 1402.30 374.96 8162.44 00:15:33.807 00:15:33.807 12:40:06 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:34.741 Initializing NVMe Controllers 00:15:34.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:34.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:34.741 Initialization complete. Launching workers. 00:15:34.741 ======================================================== 00:15:34.741 Latency(us) 00:15:34.741 Device Information : IOPS MiB/s Average min max 00:15:34.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3210.00 12.54 311.23 114.27 5183.13 00:15:34.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8104.08 5992.36 12036.37 00:15:34.741 ======================================================== 00:15:34.741 Total : 3334.00 13.02 601.06 114.27 12036.37 00:15:34.741 00:15:34.999 12:40:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:36.376 Initializing NVMe Controllers 00:15:36.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:36.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:36.376 Initialization complete. Launching workers. 
00:15:36.376 ======================================================== 00:15:36.376 Latency(us) 00:15:36.376 Device Information : IOPS MiB/s Average min max 00:15:36.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8439.19 32.97 3794.49 649.98 7586.33 00:15:36.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4021.14 15.71 8010.14 6112.55 15414.46 00:15:36.376 ======================================================== 00:15:36.376 Total : 12460.32 48.67 5154.95 649.98 15414.46 00:15:36.376 00:15:36.376 12:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:36.376 12:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:38.905 Initializing NVMe Controllers 00:15:38.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.905 Controller IO queue size 128, less than required. 00:15:38.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:38.905 Controller IO queue size 128, less than required. 00:15:38.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:38.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:38.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:38.905 Initialization complete. Launching workers. 00:15:38.905 ======================================================== 00:15:38.905 Latency(us) 00:15:38.905 Device Information : IOPS MiB/s Average min max 00:15:38.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1517.80 379.45 85677.21 50966.75 162939.78 00:15:38.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 671.69 167.92 198795.09 65327.82 304068.81 00:15:38.905 ======================================================== 00:15:38.905 Total : 2189.50 547.37 120379.43 50966.75 304068.81 00:15:38.905 00:15:38.905 12:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:39.163 Initializing NVMe Controllers 00:15:39.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:39.163 Controller IO queue size 128, less than required. 00:15:39.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.163 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:39.163 Controller IO queue size 128, less than required. 00:15:39.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.163 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:15:39.163 WARNING: Some requested NVMe devices were skipped 00:15:39.163 No valid NVMe controllers or AIO or URING devices found 00:15:39.163 12:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:41.696 Initializing NVMe Controllers 00:15:41.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.696 Controller IO queue size 128, less than required. 00:15:41.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.696 Controller IO queue size 128, less than required. 00:15:41.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:41.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:41.696 Initialization complete. Launching workers. 00:15:41.696 00:15:41.696 ==================== 00:15:41.696 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:41.696 TCP transport: 00:15:41.696 polls: 8189 00:15:41.696 idle_polls: 5106 00:15:41.696 sock_completions: 3083 00:15:41.696 nvme_completions: 5091 00:15:41.696 submitted_requests: 7612 00:15:41.696 queued_requests: 1 00:15:41.696 00:15:41.696 ==================== 00:15:41.696 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:41.696 TCP transport: 00:15:41.696 polls: 8319 00:15:41.696 idle_polls: 4743 00:15:41.696 sock_completions: 3576 00:15:41.696 nvme_completions: 5633 00:15:41.696 submitted_requests: 8480 00:15:41.696 queued_requests: 1 00:15:41.696 ======================================================== 00:15:41.696 Latency(us) 00:15:41.696 Device Information : IOPS MiB/s Average min max 00:15:41.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1271.22 317.81 103282.95 54718.83 159295.38 00:15:41.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1406.59 351.65 90702.24 42333.19 165806.31 00:15:41.696 ======================================================== 00:15:41.696 Total : 2677.81 669.45 96674.61 42333.19 165806.31 00:15:41.696 00:15:41.696 12:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:41.696 12:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.955 rmmod nvme_tcp 00:15:41.955 rmmod nvme_fabrics 00:15:41.955 rmmod nvme_keyring 00:15:41.955 12:40:14 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75100 ']' 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75100 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75100 ']' 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75100 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75100 00:15:41.955 killing process with pid 75100 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75100' 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75100 00:15:41.955 12:40:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75100 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:42.892 ************************************ 00:15:42.892 END TEST nvmf_perf 00:15:42.892 ************************************ 00:15:42.892 00:15:42.892 real 0m14.490s 00:15:42.892 user 0m53.058s 00:15:42.892 sys 0m4.184s 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:42.892 12:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.892 12:40:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:42.892 12:40:15 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:42.892 12:40:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:42.892 12:40:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.892 12:40:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:42.892 ************************************ 00:15:42.892 START TEST nvmf_fio_host 00:15:42.892 ************************************ 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:42.892 * Looking for test storage... 
00:15:42.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:42.892 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
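For reference, the nvmf_veth_init replay that follows (the fio host test rebuilds the same initiator/target topology the perf test above used) condenses to the sketch below; the namespace, interface names, 10.0.0.x/24 addresses, and port 4420 are taken directly from the trace, and only the ordering plus the link-up/ping checks are summarized.

    ip netns add nvmf_tgt_ns_spdk                               # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                             # bridge joining the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the script then brings every link (and the namespace loopback) up and
    # ping-checks 10.0.0.2, 10.0.0.3 and 10.0.0.1 before returning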
00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:42.893 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:43.151 Cannot find device "nvmf_tgt_br" 00:15:43.151 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:43.151 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.151 Cannot find device "nvmf_tgt_br2" 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:43.152 Cannot find device "nvmf_tgt_br" 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:43.152 Cannot find device "nvmf_tgt_br2" 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.152 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:43.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:43.411 00:15:43.411 --- 10.0.0.2 ping statistics --- 00:15:43.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.411 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:43.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:43.411 00:15:43.411 --- 10.0.0.3 ping statistics --- 00:15:43.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.411 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:43.411 00:15:43.411 --- 10.0.0.1 ping statistics --- 00:15:43.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.411 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75511 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75511 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75511 ']' 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.411 12:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.411 [2024-07-15 12:40:15.949125] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:43.412 [2024-07-15 12:40:15.949440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.670 [2024-07-15 12:40:16.093776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.670 [2024-07-15 12:40:16.265697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:43.670 [2024-07-15 12:40:16.266111] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.670 [2024-07-15 12:40:16.266295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.670 [2024-07-15 12:40:16.266606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.670 [2024-07-15 12:40:16.266831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.670 [2024-07-15 12:40:16.267030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.670 [2024-07-15 12:40:16.267291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.670 [2024-07-15 12:40:16.267428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.670 [2024-07-15 12:40:16.267445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.670 [2024-07-15 12:40:16.347117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:44.605 12:40:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.605 12:40:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:44.605 12:40:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:44.605 [2024-07-15 12:40:17.190015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.605 12:40:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:44.605 12:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.605 12:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 12:40:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:44.863 Malloc1 00:15:44.863 12:40:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:45.121 12:40:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.379 12:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.638 [2024-07-15 12:40:18.291911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.638 12:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:45.897 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.156 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.156 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:46.156 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:46.156 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:46.156 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:46.156 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:46.156 12:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:46.156 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:46.156 fio-3.35 00:15:46.156 Starting 1 thread 00:15:48.704 00:15:48.704 test: (groupid=0, jobs=1): err= 0: pid=75594: Mon Jul 15 12:40:21 2024 00:15:48.704 read: IOPS=8873, BW=34.7MiB/s (36.3MB/s)(69.6MiB/2007msec) 00:15:48.704 slat (usec): min=2, max=290, avg= 2.45, stdev= 2.80 00:15:48.704 clat (usec): min=2171, max=13644, avg=7510.11, stdev=567.50 00:15:48.704 lat (usec): min=2213, max=13646, avg=7512.56, stdev=567.27 00:15:48.704 clat percentiles (usec): 00:15:48.704 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:15:48.704 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:15:48.704 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8291], 00:15:48.704 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[11600], 99.95th=[12780], 00:15:48.704 | 99.99th=[13566] 00:15:48.704 bw ( KiB/s): min=33604, max=36216, per=99.91%, avg=35463.00, stdev=1245.81, samples=4 00:15:48.704 iops : min= 8401, max= 9054, avg=8865.75, stdev=311.45, samples=4 00:15:48.704 write: IOPS=8885, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2007msec); 0 zone resets 00:15:48.704 slat (usec): 
min=2, max=203, avg= 2.54, stdev= 1.81 00:15:48.704 clat (usec): min=2016, max=13006, avg=6851.82, stdev=532.70 00:15:48.704 lat (usec): min=2027, max=13008, avg=6854.36, stdev=532.59 00:15:48.704 clat percentiles (usec): 00:15:48.704 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:15:48.704 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915], 00:15:48.704 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7570], 00:15:48.704 | 99.00th=[ 8225], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[12649], 00:15:48.704 | 99.99th=[13042] 00:15:48.704 bw ( KiB/s): min=34499, max=36368, per=99.98%, avg=35536.75, stdev=835.97, samples=4 00:15:48.704 iops : min= 8624, max= 9092, avg=8884.00, stdev=209.30, samples=4 00:15:48.704 lat (msec) : 4=0.21%, 10=99.59%, 20=0.20% 00:15:48.704 cpu : usr=68.74%, sys=23.43%, ctx=19, majf=0, minf=7 00:15:48.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:48.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.704 issued rwts: total=17810,17834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.704 00:15:48.704 Run status group 0 (all jobs): 00:15:48.704 READ: bw=34.7MiB/s (36.3MB/s), 34.7MiB/s-34.7MiB/s (36.3MB/s-36.3MB/s), io=69.6MiB (72.9MB), run=2007-2007msec 00:15:48.704 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.0MB), run=2007-2007msec 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.704 12:40:21 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:48.704 12:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:48.704 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:48.704 fio-3.35 00:15:48.704 Starting 1 thread 00:15:51.231 00:15:51.231 test: (groupid=0, jobs=1): err= 0: pid=75639: Mon Jul 15 12:40:23 2024 00:15:51.231 read: IOPS=7621, BW=119MiB/s (125MB/s)(239MiB/2008msec) 00:15:51.231 slat (usec): min=3, max=118, avg= 3.83, stdev= 1.84 00:15:51.231 clat (usec): min=2825, max=22046, avg=9655.32, stdev=2877.09 00:15:51.231 lat (usec): min=2828, max=22050, avg=9659.15, stdev=2877.07 00:15:51.231 clat percentiles (usec): 00:15:51.231 | 1.00th=[ 4490], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 7177], 00:15:51.231 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10159], 00:15:51.231 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13566], 95.00th=[14615], 00:15:51.231 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19268], 99.95th=[20841], 00:15:51.231 | 99.99th=[22152] 00:15:51.231 bw ( KiB/s): min=54112, max=72015, per=51.06%, avg=62267.75, stdev=8615.43, samples=4 00:15:51.231 iops : min= 3382, max= 4500, avg=3891.50, stdev=538.11, samples=4 00:15:51.231 write: IOPS=4516, BW=70.6MiB/s (74.0MB/s)(127MiB/1800msec); 0 zone resets 00:15:51.231 slat (usec): min=35, max=496, avg=38.49, stdev= 9.42 00:15:51.231 clat (usec): min=4722, max=21947, avg=12727.24, stdev=2460.21 00:15:51.231 lat (usec): min=4759, max=21985, avg=12765.73, stdev=2459.75 00:15:51.231 clat percentiles (usec): 00:15:51.231 | 1.00th=[ 8094], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10683], 00:15:51.231 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12387], 60.00th=[13173], 00:15:51.231 | 70.00th=[13960], 80.00th=[14877], 90.00th=[15926], 95.00th=[17171], 00:15:51.231 | 99.00th=[19268], 99.50th=[20317], 99.90th=[21365], 99.95th=[21627], 00:15:51.231 | 99.99th=[21890] 00:15:51.231 bw ( KiB/s): min=57600, max=74027, per=89.42%, avg=64618.75, stdev=8276.91, samples=4 00:15:51.231 iops : min= 3600, max= 4626, avg=4038.50, stdev=517.05, samples=4 00:15:51.231 lat (msec) : 4=0.18%, 10=41.39%, 20=58.15%, 50=0.27% 00:15:51.231 cpu : usr=80.32%, sys=15.50%, ctx=11, majf=0, minf=12 00:15:51.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:51.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.231 issued rwts: total=15304,8130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.231 00:15:51.231 Run status group 0 (all jobs): 00:15:51.231 READ: bw=119MiB/s (125MB/s), 
119MiB/s-119MiB/s (125MB/s-125MB/s), io=239MiB (251MB), run=2008-2008msec 00:15:51.231 WRITE: bw=70.6MiB/s (74.0MB/s), 70.6MiB/s-70.6MiB/s (74.0MB/s-74.0MB/s), io=127MiB (133MB), run=1800-1800msec 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.231 rmmod nvme_tcp 00:15:51.231 rmmod nvme_fabrics 00:15:51.231 rmmod nvme_keyring 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75511 ']' 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75511 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75511 ']' 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75511 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75511 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75511' 00:15:51.231 killing process with pid 75511 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75511 00:15:51.231 12:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75511 00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
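Stripped of the xtrace scaffolding, the fio host test wrapped up above reduces to standing the target up over rpc.py and then driving it with fio through the SPDK plugin. The sketch below only restates commands already captured in the trace (paths, NQN, address, and options are verbatim); the surrounding shell plumbing is dropped, and the $rpc shorthand is introduced here purely for brevity.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py              # shorthand for the rpc.py path used in the trace
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options exactly as traced
    $rpc bdev_malloc_create 64 512 -b Malloc1                    # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # fio with the SPDK ioengine preloaded; the banner above reports ioengine=spdk, iodepth=128
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
    # the second pass swaps in mock_sgl_config.fio against the same filename string (16 KiB blocks)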
00:15:51.489 12:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.748 12:40:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:51.748 00:15:51.748 real 0m8.788s 00:15:51.748 user 0m35.634s 00:15:51.748 sys 0m2.410s 00:15:51.748 12:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.748 ************************************ 00:15:51.748 END TEST nvmf_fio_host 00:15:51.748 ************************************ 00:15:51.748 12:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.748 12:40:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:51.748 12:40:24 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:51.748 12:40:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:51.748 12:40:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.748 12:40:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.748 ************************************ 00:15:51.748 START TEST nvmf_failover 00:15:51.748 ************************************ 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:51.748 * Looking for test storage... 00:15:51.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.748 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:51.749 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:52.009 Cannot find device "nvmf_tgt_br" 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:52.009 Cannot find device "nvmf_tgt_br2" 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:52.009 Cannot find device "nvmf_tgt_br" 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:52.009 Cannot find device "nvmf_tgt_br2" 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:52.009 12:40:24 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:52.009 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:52.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:15:52.267 00:15:52.267 --- 10.0.0.2 ping statistics --- 00:15:52.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.267 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:52.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:15:52.267 00:15:52.267 --- 10.0.0.3 ping statistics --- 00:15:52.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.267 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:52.267 00:15:52.267 --- 10.0.0.1 ping statistics --- 00:15:52.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.267 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75851 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75851 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
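The nvmf_veth_init trace above boils down to a small veth/namespace topology: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target interface lives in nvmf_tgt_ns_spdk (10.0.0.2, plus 10.0.0.3 on a second interface), and the veth peers are joined through the nvmf_br bridge before reachability is verified with ping. A condensed sketch of the same commands (names and addresses taken from the trace; the second target interface and the error-tolerant cleanup are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                      # initiator -> target check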
00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75851 ']' 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.267 12:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:52.267 [2024-07-15 12:40:24.835572] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:52.267 [2024-07-15 12:40:24.835688] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.526 [2024-07-15 12:40:24.981069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.526 [2024-07-15 12:40:25.115382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.526 [2024-07-15 12:40:25.115457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.526 [2024-07-15 12:40:25.115481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.526 [2024-07-15 12:40:25.115499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.526 [2024-07-15 12:40:25.115514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
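The app_setup_trace notices above spell out how the running target can be inspected: tracepoint group mask 0xFFFF was enabled via -e 0xFFFF, so a snapshot can be pulled from the shared-memory trace at any point. A sketch of the two options the notice mentions (the spdk_trace binary path is assumed to sit in the same build tree used elsewhere in this log):

    # Snapshot the nvmf target's tracepoints for shm id 0 at runtime, as the notice suggests
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
    # ...or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0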
00:15:52.526 [2024-07-15 12:40:25.115697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.526 [2024-07-15 12:40:25.115832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.526 [2024-07-15 12:40:25.116156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.526 [2024-07-15 12:40:25.174162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:53.463 12:40:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.463 12:40:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:53.463 12:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.463 12:40:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.463 12:40:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:53.463 12:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.463 12:40:25 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.721 [2024-07-15 12:40:26.161442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.721 12:40:26 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:53.980 Malloc0 00:15:53.980 12:40:26 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:54.238 12:40:26 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.497 12:40:26 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.756 [2024-07-15 12:40:27.182920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.756 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:54.756 [2024-07-15 12:40:27.422984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:55.015 [2024-07-15 12:40:27.655214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:55.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
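Before bdevperf comes up, the target side has been fully configured through rpc.py: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem with that namespace, and listeners on three ports of 10.0.0.2. The same sequence collapsed into one place, as a sketch (rpc_py is the scripts/rpc.py path used throughout this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do    # three paths for the failover steps that follow
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done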
00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75909 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75909 /var/tmp/bdevperf.sock 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75909 ']' 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.015 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:56.402 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.402 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:56.402 12:40:28 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:56.402 NVMe0n1 00:15:56.402 12:40:29 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:56.970 00:15:56.970 12:40:29 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75932 00:15:56.970 12:40:29 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:56.970 12:40:29 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:57.906 12:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.165 12:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:01.445 12:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.445 00:16:01.445 12:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:01.703 12:40:34 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:04.984 12:40:37 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.984 [2024-07-15 12:40:37.504100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.984 
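The failover itself is driven from both ends: bdevperf attaches NVMe0 over two of the listeners, perform_tests starts the 15-second verify workload, and the script then removes and re-adds listeners underneath it so the NVMe bdev has to fail over while I/O is in flight. The steps traced above, condensed into a sketch (sockets, ports and NQN as in the log; the brpc helper name is ours; the final removal of the 4422 listener and the wait for perform_tests follow just below):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc() { $rpc_py -s /var/tmp/bdevperf.sock "$@"; }   # hypothetical helper for the bdevperf-side RPCs
    brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1; $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # drop the active path
    sleep 3; brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421           # second failover
    sleep 3; $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # restore 4420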
12:40:37 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:05.919 12:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:06.178 12:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75932 00:16:12.742 0 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75909 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75909 ']' 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75909 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75909 00:16:12.742 killing process with pid 75909 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75909' 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75909 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75909 00:16:12.742 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:12.742 [2024-07-15 12:40:27.726870] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:12.742 [2024-07-15 12:40:27.727004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75909 ] 00:16:12.742 [2024-07-15 12:40:27.870670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.742 [2024-07-15 12:40:27.986828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.742 [2024-07-15 12:40:28.043933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:12.742 Running I/O for 15 seconds... 
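From "Starting SPDK ... bdevperf" onward the output is the contents of try.txt, catted by the trap at the end of the run: the long runs of nvme_qpair notices are the in-flight WRITEs of the verify workload being aborted when a listener disappears and its queue pair is deleted, after which the workload is expected to ride through on the surviving path. With so many near-identical lines, it is easier to summarize the file than to read it; a post-processing sketch (grep/sort only, try.txt path taken from the cat above, run before the trap deletes the file):

    try=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$try"                           # how many completions were aborted in total
    grep -o 'lba:[0-9]*' "$try" | sort -t: -k2,2n | sed -n '1p;$p'   # lowest and highest LBA among the printed commands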
00:16:12.742 [2024-07-15 12:40:30.643218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.742 [2024-07-15 12:40:30.643293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.742 [2024-07-15 12:40:30.643341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.743 [2024-07-15 12:40:30.643603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.743 [2024-07-15 12:40:30.643619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[condensed: the same nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) print_completion pair repeats unchanged for every in-flight WRITE on qid:1 from lba:62824 through lba:63544 (len:8 each, cids varying), as the 10.0.0.2:4420 listener is removed and the queue pair torn down]
00:16:12.745 [2024-07-15 12:40:30.646600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63552
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.646904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:12.745 [2024-07-15 12:40:30.646942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.646973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.646994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.745 [2024-07-15 12:40:30.647383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.745 [2024-07-15 12:40:30.647414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.745 [2024-07-15 12:40:30.647429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d17c0 is same with the state(5) to be set 00:16:12.745 [2024-07-15 12:40:30.647447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.745 [2024-07-15 12:40:30.647458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.745 [2024-07-15 12:40:30.647470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63640 len:8 PRP1 0x0 PRP2 0x0 00:16:12.745 [2024-07-15 12:40:30.647489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:30.647549] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11d17c0 was disconnected and freed. reset controller. 
00:16:12.746 [2024-07-15 12:40:30.647590] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:12.746 [2024-07-15 12:40:30.647660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.746 [2024-07-15 12:40:30.647682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:30.647699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.746 [2024-07-15 12:40:30.647713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:30.647741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.746 [2024-07-15 12:40:30.647759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:30.647774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.746 [2024-07-15 12:40:30.647788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:30.647802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:12.746 [2024-07-15 12:40:30.647864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1180570 (9): Bad file descriptor 00:16:12.746 [2024-07-15 12:40:30.651648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.746 [2024-07-15 12:40:30.688588] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
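The block above (timestamps around 12:40:30) records every in-flight command on qpair 0x11d17c0 being completed as "ABORTED - SQ DELETION" when the TCP connection drops, followed by the bdev_nvme failover from 10.0.0.2:4420 to 10.0.0.2:4421 and a successful controller reset. A minimal sketch for tallying those aborted commands per teardown event is below; it only pattern-matches the literal log text shown in this console output (the file name "console.log" is an assumption, not part of the test suite), and the console wraps entries arbitrarily, so it scans the whole text rather than individual lines.

```python
# Minimal sketch: count READ/WRITE commands reported by nvme_io_qpair_print_command
# in each batch that ends with "qpair 0x... was disconnected and freed. reset controller."
# Assumes a console log shaped like the output above; "console.log" is only an example path.
import re
from collections import Counter

CMD_RE = re.compile(r'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)')
FREED_RE = re.compile(r'qpair (0x[0-9a-f]+) was disconnected and freed\. reset controller\.')

def summarize(path="console.log"):
    with open(path) as log:
        text = log.read()
    pos = 0
    for freed in FREED_RE.finditer(text):
        # Everything between the previous teardown and this one belongs to this qpair batch.
        batch = text[pos:freed.start()]
        counts = Counter(m.group(1) for m in CMD_RE.finditer(batch))
        print(f"qpair {freed.group(1)}: "
              f"{counts['WRITE']} WRITEs and {counts['READ']} READs aborted (SQ deletion)")
        pos = freed.end()

if __name__ == "__main__":
    summarize()
```

For the 12:40:30 event this would report the WRITE-heavy batch aborted on qpair 0x11d17c0 before the controller was reset and I/O resumed on the 4421 listener.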
00:16:12.746 [2024-07-15 12:40:34.259628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.259756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.259821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.259852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.259882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.259913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.259943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.259973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.259987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260064] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.746 [2024-07-15 12:40:34.260500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76984 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.746 [2024-07-15 12:40:34.260739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.746 [2024-07-15 12:40:34.260757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.260788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.260818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.260849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.260879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.260910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.260940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.260971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.260985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:12.747 [2024-07-15 12:40:34.261054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.747 [2024-07-15 12:40:34.261524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.747 [2024-07-15 12:40:34.261957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.747 [2024-07-15 12:40:34.261973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.261987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.262569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 
12:40:34.262685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.262976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.262991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.263021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.263061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.748 [2024-07-15 12:40:34.263092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.263122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.263152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.263182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.263212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.263242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.748 [2024-07-15 12:40:34.263258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.748 [2024-07-15 12:40:34.263272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.749 [2024-07-15 12:40:34.263309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1202d30 is same with the state(5) to be set 00:16:12.749 [2024-07-15 12:40:34.263342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76800 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77320 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77328 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77336 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77344 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77352 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77360 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77368 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77376 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77384 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.263951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.263965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.263979] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.263989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.264000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.264013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.264037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.264048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77416 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.264062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.264086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.264096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77424 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.264110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.264134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.264145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77432 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.264158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.749 [2024-07-15 12:40:34.264183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.749 [2024-07-15 12:40:34.264194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77440 len:8 PRP1 0x0 PRP2 0x0 00:16:12.749 [2024-07-15 12:40:34.264213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264274] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1202d30 was disconnected and freed. reset controller. 
00:16:12.749 [2024-07-15 12:40:34.264292] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:12.749 [2024-07-15 12:40:34.264373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:34.264396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:34.264425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:34.264454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:34.264482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:34.264496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:12.749 [2024-07-15 12:40:34.264547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1180570 (9): Bad file descriptor 00:16:12.749 [2024-07-15 12:40:34.268379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.749 [2024-07-15 12:40:34.303952] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
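The run above is one complete failover cycle: the TCP connection toward 10.0.0.2:4421 drops, every queued I/O on qpair 0x1202d30 is completed manually with ABORTED - SQ DELETION, the qpair is freed, bdev_nvme moves the failover trid to 10.0.0.2:4422, and the controller reset succeeds. For reference, a minimal sketch of how the alternate paths behind such a failover are registered; the commands are copied from the host/failover.sh trace further down in this log (the second bdevperf instance), so the ports, NQN and RPC socket are the ones this job actually uses:
  # Target side: expose extra listeners that can serve as failover paths.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Initiator side (bdevperf RPC socket): attach the same controller through each
  # path so bdev_nvme can fail over between 4420, 4421 and 4422.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1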
00:16:12.749 [2024-07-15 12:40:38.754987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:38.755058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:38.755081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:38.755096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:38.755111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:38.755125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:38.755140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.749 [2024-07-15 12:40:38.755155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.749 [2024-07-15 12:40:38.755170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1180570 is same with the state(5) to be set 00:16:12.750 [2024-07-15 12:40:38.755238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.755520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.755972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.755986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.756017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.756047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 
[2024-07-15 12:40:38.756455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.750 [2024-07-15 12:40:38.756562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.756601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.750 [2024-07-15 12:40:38.756618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.750 [2024-07-15 12:40:38.756634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.756665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.756695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.756739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.756774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.756805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.756836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.756867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.756898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.756928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.756959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.756975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.756990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23040 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.751 [2024-07-15 12:40:38.757607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.757638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.757669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.757700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.757742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 
[2024-07-15 12:40:38.757775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.751 [2024-07-15 12:40:38.757807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.751 [2024-07-15 12:40:38.757824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.757876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.757896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.757911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.757927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.757942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.757959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.757974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.757990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.752 [2024-07-15 12:40:38.758950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.758981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.758997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.752 [2024-07-15 12:40:38.759012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.752 [2024-07-15 12:40:38.759028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.753 [2024-07-15 12:40:38.759042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.753 [2024-07-15 12:40:38.759073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.753 [2024-07-15 12:40:38.759111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 
[2024-07-15 12:40:38.759127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.753 [2024-07-15 12:40:38.759142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.753 [2024-07-15 12:40:38.759177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.753 [2024-07-15 12:40:38.759208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.753 [2024-07-15 12:40:38.759239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.753 [2024-07-15 12:40:38.759270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.753 [2024-07-15 12:40:38.759300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.753 [2024-07-15 12:40:38.759331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.753 [2024-07-15 12:40:38.759362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.753 [2024-07-15 12:40:38.759398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.753 [2024-07-15 12:40:38.759429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.753 [2024-07-15 12:40:38.759471] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:12.753 [2024-07-15 12:40:38.759486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:12.753 [2024-07-15 12:40:38.759498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23272 len:8 PRP1 0x0 PRP2 0x0
00:16:12.753 [2024-07-15 12:40:38.759513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.753 [2024-07-15 12:40:38.759582] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1201dd0 was disconnected and freed. reset controller.
00:16:12.753 [2024-07-15 12:40:38.759602] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:16:12.753 [2024-07-15 12:40:38.759619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:12.753 [2024-07-15 12:40:38.763469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:12.753 [2024-07-15 12:40:38.763513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1180570 (9): Bad file descriptor
00:16:12.753 [2024-07-15 12:40:38.798117] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:12.753
00:16:12.753 Latency(us)
00:16:12.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:12.753 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:12.753 Verification LBA range: start 0x0 length 0x4000
00:16:12.753 NVMe0n1 : 15.01 8790.44 34.34 215.13 0.00 14180.62 636.74 17396.83
00:16:12.753 ===================================================================================================================
00:16:12.753 Total : 8790.44 34.34 215.13 0.00 14180.62 636.74 17396.83
00:16:12.753 Received shutdown signal, test time was about 15.000000 seconds
00:16:12.753
00:16:12.753 Latency(us)
00:16:12.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:12.753 ===================================================================================================================
00:16:12.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:12.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
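That closes the first 15-second verify pass: 8790.44 IOPS at the 4096-byte I/O size matches the reported 34.34 MiB/s (8790.44 x 4096 / 1048576 is approximately 34.34). The grep traced above and the count check that follows it are failover.sh's pass criterion; a minimal bash sketch of that check, where the log file name is an assumption (the script counts matches in its own captured bdevperf output):
  # Expect exactly three 'Resetting controller successful' messages, one per
  # failover hop in this run (4420->4421, 4421->4422, 4422->4420).
  count=$(grep -c 'Resetting controller successful' bdevperf_output.txt)   # file name assumed
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi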
00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76105 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76105 /var/tmp/bdevperf.sock 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76105 ']' 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.753 12:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:13.321 12:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.321 12:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:13.321 12:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:13.579 [2024-07-15 12:40:46.114388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:13.579 12:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:13.838 [2024-07-15 12:40:46.342777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:13.838 12:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:14.097 NVMe0n1 00:16:14.097 12:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:14.356 00:16:14.356 12:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:14.922 00:16:14.922 12:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:14.922 12:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:14.922 12:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:15.181 12:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:18.469 12:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:18.469 12:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:18.469 12:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76188 00:16:18.469 12:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:18.469 12:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76188 00:16:19.869 0 00:16:19.869 12:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:19.869 [2024-07-15 12:40:44.869210] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:19.869 [2024-07-15 12:40:44.869313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76105 ] 00:16:19.869 [2024-07-15 12:40:45.009500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.869 [2024-07-15 12:40:45.127712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.869 [2024-07-15 12:40:45.182256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:19.869 [2024-07-15 12:40:47.807737] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:19.869 [2024-07-15 12:40:47.807868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.869 [2024-07-15 12:40:47.807894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.869 [2024-07-15 12:40:47.807914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.869 [2024-07-15 12:40:47.807928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.869 [2024-07-15 12:40:47.807944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.869 [2024-07-15 12:40:47.807958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.869 [2024-07-15 12:40:47.807972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.869 [2024-07-15 12:40:47.807986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.869 [2024-07-15 12:40:47.808001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:19.869 [2024-07-15 12:40:47.808056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:19.869 [2024-07-15 12:40:47.808092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157d570 (9): Bad file descriptor 00:16:19.869 [2024-07-15 12:40:47.812637] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
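The try.txt excerpt above shows the mechanism under test end to end: subsystem nqn.2016-06.io.spdk:cnode1 is exposed on 10.0.0.2 ports 4420, 4421 and 4422, the bdevperf-side controller NVMe0 is attached over each path, and the active path is then detached so bdev_nvme has to fail over to the next trid (the bdev_nvme_failover_trid notice), after which a short verify run confirms I/O still completes. A condensed sketch of that setup using the same RPCs that appear in the log, with rpc.py paths abbreviated and assuming the 4420 listener already exists and a bdevperf instance is listening on /var/tmp/bdevperf.sock:

    # extra listeners so the controller has alternate paths to fail over to
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attach the same bdev controller name over every path
    for port in 4420 4421 4422; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # tear down the active path; subsequent I/O should succeed only if failover worked
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1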
00:16:19.869 Running I/O for 1 seconds... 00:16:19.869 00:16:19.869 Latency(us) 00:16:19.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.869 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.869 Verification LBA range: start 0x0 length 0x4000 00:16:19.870 NVMe0n1 : 1.01 7453.17 29.11 0.00 0.00 17075.63 1325.61 17039.36 00:16:19.870 =================================================================================================================== 00:16:19.870 Total : 7453.17 29.11 0.00 0.00 17075.63 1325.61 17039.36 00:16:19.870 12:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:19.870 12:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:19.870 12:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:20.131 12:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:20.131 12:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:20.390 12:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:20.649 12:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76105 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76105 ']' 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76105 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.932 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76105 00:16:24.191 killing process with pid 76105 00:16:24.191 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:24.191 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:24.191 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76105' 00:16:24.191 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76105 00:16:24.191 12:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76105 00:16:24.191 12:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:24.450 12:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.450 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.450 rmmod nvme_tcp 00:16:24.708 rmmod nvme_fabrics 00:16:24.708 rmmod nvme_keyring 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75851 ']' 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75851 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75851 ']' 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75851 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75851 00:16:24.708 killing process with pid 75851 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75851' 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75851 00:16:24.708 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75851 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:24.967 00:16:24.967 real 0m33.212s 00:16:24.967 user 2m8.312s 00:16:24.967 sys 0m5.916s 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.967 12:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:24.967 ************************************ 00:16:24.967 END TEST nvmf_failover 00:16:24.967 
************************************ 00:16:24.967 12:40:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.967 12:40:57 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:24.967 12:40:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.967 12:40:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.967 12:40:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.967 ************************************ 00:16:24.967 START TEST nvmf_host_discovery 00:16:24.967 ************************************ 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:24.967 * Looking for test storage... 00:16:24.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.967 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.968 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.968 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:25.227 Cannot find device "nvmf_tgt_br" 00:16:25.227 
12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.227 Cannot find device "nvmf_tgt_br2" 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:25.227 Cannot find device "nvmf_tgt_br" 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:25.227 Cannot find device "nvmf_tgt_br2" 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.227 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:25.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:25.487 00:16:25.487 --- 10.0.0.2 ping statistics --- 00:16:25.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.487 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:25.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:25.487 00:16:25.487 --- 10.0.0.3 ping statistics --- 00:16:25.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.487 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:25.487 00:16:25.487 --- 10.0.0.1 ping statistics --- 00:16:25.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.487 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76452 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76452 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76452 ']' 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.487 12:40:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.487 [2024-07-15 12:40:58.043557] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:25.487 [2024-07-15 12:40:58.043667] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.746 [2024-07-15 12:40:58.183135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.746 [2024-07-15 12:40:58.297199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:25.746 [2024-07-15 12:40:58.297263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.746 [2024-07-15 12:40:58.297274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.746 [2024-07-15 12:40:58.297282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.746 [2024-07-15 12:40:58.297290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.746 [2024-07-15 12:40:58.297313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.746 [2024-07-15 12:40:58.350089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.687 [2024-07-15 12:40:59.103565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.687 [2024-07-15 12:40:59.111637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.687 null0 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.687 null1 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76484 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76484 /tmp/host.sock 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76484 ']' 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:26.687 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.687 12:40:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.687 [2024-07-15 12:40:59.191994] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:26.687 [2024-07-15 12:40:59.192076] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76484 ] 00:16:26.687 [2024-07-15 12:40:59.331648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.946 [2024-07-15 12:40:59.460891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.946 [2024-07-15 12:40:59.530878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:27.882 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.882 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:27.882 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.883 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.142 [2024-07-15 12:41:00.644136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:28.142 
12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:28.142 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.143 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:28.402 12:41:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:28.661 [2024-07-15 12:41:01.286647] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:28.661 [2024-07-15 12:41:01.286679] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:28.661 [2024-07-15 12:41:01.286698] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:28.661 [2024-07-15 12:41:01.292705] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:28.921 [2024-07-15 12:41:01.350366] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:16:28.921 [2024-07-15 12:41:01.350567] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.489 12:41:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.489 12:41:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:29.489 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.490 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.749 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:29.749 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.749 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.750 
12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 [2024-07-15 12:41:02.233884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:29.750 [2024-07-15 12:41:02.234772] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:29.750 [2024-07-15 12:41:02.234813] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:29.750 [2024-07-15 12:41:02.240764] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.750 [2024-07-15 12:41:02.300040] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:29.750 [2024-07-15 12:41:02.300226] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:29.750 [2024-07-15 12:41:02.300452] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 [2024-07-15 12:41:02.454992] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:30.010 [2024-07-15 12:41:02.455033] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:30.010 [2024-07-15 12:41:02.457451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.010 [2024-07-15 12:41:02.457496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.010 [2024-07-15 12:41:02.457511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.010 [2024-07-15 12:41:02.457522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.010 [2024-07-15 12:41:02.457533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.010 [2024-07-15 12:41:02.457543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.010 [2024-07-15 12:41:02.457553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.010 [2024-07-15 12:41:02.457563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.010 [2024-07-15 12:41:02.457572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ec600 is same with the state(5) to be set 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.010 12:41:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:30.010 [2024-07-15 12:41:02.460981] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:30.010 [2024-07-15 12:41:02.461013] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:30.010 [2024-07-15 12:41:02.461077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ec600 (9): Bad file descriptor 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:30.010 12:41:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:30.269 12:41:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.269 12:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.205 [2024-07-15 12:41:03.847230] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:31.205 [2024-07-15 12:41:03.847270] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:31.205 [2024-07-15 12:41:03.847290] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:31.205 [2024-07-15 12:41:03.853268] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:31.464 [2024-07-15 12:41:03.914084] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:31.464 [2024-07-15 12:41:03.914384] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.464 12:41:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.464 request: 00:16:31.464 { 00:16:31.464 "name": "nvme", 00:16:31.464 "trtype": "tcp", 00:16:31.464 "traddr": "10.0.0.2", 00:16:31.464 "adrfam": "ipv4", 00:16:31.464 "trsvcid": "8009", 00:16:31.464 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:31.464 "wait_for_attach": true, 00:16:31.464 "method": "bdev_nvme_start_discovery", 00:16:31.464 "req_id": 1 00:16:31.464 } 00:16:31.464 Got JSON-RPC error response 00:16:31.464 response: 00:16:31.464 { 00:16:31.464 "code": -17, 00:16:31.464 "message": "File exists" 00:16:31.464 } 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.464 12:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.464 request: 00:16:31.464 { 00:16:31.464 "name": "nvme_second", 00:16:31.464 "trtype": "tcp", 00:16:31.464 "traddr": "10.0.0.2", 00:16:31.464 "adrfam": "ipv4", 00:16:31.464 "trsvcid": "8009", 00:16:31.464 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:31.464 "wait_for_attach": true, 00:16:31.464 "method": "bdev_nvme_start_discovery", 00:16:31.464 "req_id": 1 00:16:31.464 } 00:16:31.464 Got JSON-RPC error response 00:16:31.464 response: 00:16:31.464 { 00:16:31.464 "code": -17, 00:16:31.464 "message": "File exists" 00:16:31.464 } 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:31.464 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.465 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.465 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.465 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.465 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.465 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.723 12:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.658 [2024-07-15 12:41:05.182898] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:32.658 [2024-07-15 12:41:05.182996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2105f20 with addr=10.0.0.2, port=8010 00:16:32.658 [2024-07-15 12:41:05.183024] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:32.658 [2024-07-15 12:41:05.183036] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:32.658 [2024-07-15 12:41:05.183046] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:33.592 [2024-07-15 12:41:06.182931] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:33.592 [2024-07-15 12:41:06.183014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2105f20 with addr=10.0.0.2, port=8010 00:16:33.592 [2024-07-15 12:41:06.183041] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:33.592 [2024-07-15 12:41:06.183052] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:33.592 [2024-07-15 12:41:06.183063] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:34.527 [2024-07-15 12:41:07.182737] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:34.527 request: 00:16:34.527 { 00:16:34.527 "name": "nvme_second", 00:16:34.527 "trtype": "tcp", 00:16:34.527 "traddr": "10.0.0.2", 00:16:34.527 "adrfam": "ipv4", 00:16:34.527 "trsvcid": "8010", 00:16:34.527 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:34.527 "wait_for_attach": false, 00:16:34.527 "attach_timeout_ms": 3000, 00:16:34.527 "method": "bdev_nvme_start_discovery", 00:16:34.527 "req_id": 1 
00:16:34.527 } 00:16:34.527 Got JSON-RPC error response 00:16:34.527 response: 00:16:34.527 { 00:16:34.527 "code": -110, 00:16:34.527 "message": "Connection timed out" 00:16:34.527 } 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:34.527 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76484 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.786 rmmod nvme_tcp 00:16:34.786 rmmod nvme_fabrics 00:16:34.786 rmmod nvme_keyring 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76452 ']' 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76452 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76452 ']' 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76452 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76452 00:16:34.786 
killing process with pid 76452 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76452' 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76452 00:16:34.786 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76452 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:35.044 00:16:35.044 real 0m10.118s 00:16:35.044 user 0m19.616s 00:16:35.044 sys 0m2.012s 00:16:35.044 ************************************ 00:16:35.044 END TEST nvmf_host_discovery 00:16:35.044 ************************************ 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:35.044 12:41:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:35.044 12:41:07 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:35.044 12:41:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:35.044 12:41:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.044 12:41:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:35.044 ************************************ 00:16:35.044 START TEST nvmf_host_multipath_status 00:16:35.044 ************************************ 00:16:35.044 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:35.301 * Looking for test storage... 
00:16:35.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.301 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:35.302 Cannot find device "nvmf_tgt_br" 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:35.302 Cannot find device "nvmf_tgt_br2" 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:35.302 Cannot find device "nvmf_tgt_br" 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:35.302 Cannot find device "nvmf_tgt_br2" 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.302 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.559 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.559 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.559 12:41:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:35.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:16:35.559 00:16:35.559 --- 10.0.0.2 ping statistics --- 00:16:35.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.559 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:35.559 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.559 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:16:35.559 00:16:35.559 --- 10.0.0.3 ping statistics --- 00:16:35.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.559 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:35.559 00:16:35.559 --- 10.0.0.1 ping statistics --- 00:16:35.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.559 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.559 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76933 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76933 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76933 ']' 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.560 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:35.560 [2024-07-15 12:41:08.227255] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
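Recap of the environment nvmf_veth_init has just assembled before the target begins servicing RPCs: a condensed sketch of the equivalent commands, with every interface name, address, and flag taken from the trace above (link-up, pre-cleanup, and ping verification steps omitted for brevity).

  # target namespace and the three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target path 1, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2, 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # one bridge joins the host-side peers; NVMe/TCP traffic to port 4420 is allowed in
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target then runs inside the namespace
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3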
00:16:35.560 [2024-07-15 12:41:08.227520] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.817 [2024-07-15 12:41:08.363057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.817 [2024-07-15 12:41:08.482316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.817 [2024-07-15 12:41:08.482641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.817 [2024-07-15 12:41:08.482812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.817 [2024-07-15 12:41:08.482942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.817 [2024-07-15 12:41:08.482985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.817 [2024-07-15 12:41:08.483202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.817 [2024-07-15 12:41:08.483213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.075 [2024-07-15 12:41:08.538497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76933 00:16:36.641 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.899 [2024-07-15 12:41:09.466637] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.899 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:37.156 Malloc0 00:16:37.156 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:37.414 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.672 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.929 [2024-07-15 12:41:10.399069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.929 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:38.186 [2024-07-15 12:41:10.619258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:38.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76993 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76993 /var/tmp/bdevperf.sock 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76993 ']' 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.186 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:39.121 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.121 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:39.121 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:39.379 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:39.637 Nvme0n1 00:16:39.637 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:39.896 Nvme0n1 00:16:39.896 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:39.896 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:42.431 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:42.431 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:42.431 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:42.431 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:43.806 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.065 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:44.065 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:44.065 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.065 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:44.394 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.394 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:44.394 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.394 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:44.682 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.940 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.940 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:44.941 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:45.199 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:45.458 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:46.395 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:46.395 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:46.395 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.395 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.653 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.653 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:46.653 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.912 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:47.170 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.170 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:47.170 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.170 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:47.429 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.429 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:47.429 12:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.429 12:41:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:47.687 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.687 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:47.687 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.687 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:47.946 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.946 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:47.946 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:47.946 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.204 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.204 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:48.204 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:48.204 12:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:48.463 12:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.838 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:50.096 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:50.096 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:50.096 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.096 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:50.354 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.354 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:50.354 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.354 12:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:50.611 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.611 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:50.611 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.611 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:50.868 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.868 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:50.868 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:50.868 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.126 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.126 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:51.126 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:51.384 12:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:51.642 12:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:52.575 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:52.575 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:52.575 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.575 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:52.833 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.833 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:52.833 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:52.833 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.091 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:53.091 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:53.091 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.091 12:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:53.656 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.927 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.927 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:53.927 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:53.927 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.205 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:54.205 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:16:54.205 12:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:54.462 12:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:54.721 12:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:55.688 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:55.688 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:55.688 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.688 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:55.946 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:55.946 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:55.946 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:55.946 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.204 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:56.205 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:56.205 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:56.205 12:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.464 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.464 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:56.464 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.464 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:56.722 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.722 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:56.722 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.722 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:56.981 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:56.981 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:56.981 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.981 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:57.240 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.240 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:57.240 12:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:57.499 12:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:57.758 12:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:58.695 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:58.695 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:58.695 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.695 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:58.954 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:58.954 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:58.954 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:58.954 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.213 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.213 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:59.213 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.213 12:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:59.472 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.472 12:41:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:59.472 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.472 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:59.730 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.730 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:59.730 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.730 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:59.989 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:59.989 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:59.989 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.989 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:00.247 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.247 12:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:00.506 12:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:00.506 12:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:00.764 12:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:01.022 12:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:01.959 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:01.959 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:01.959 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.959 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:02.217 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.217 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:02.217 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:02.217 12:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.474 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.474 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:02.474 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.475 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:02.731 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.731 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:02.731 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.731 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:02.989 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.989 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:02.989 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.989 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:03.246 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.246 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:03.246 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.246 12:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:03.504 12:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.504 12:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:03.504 12:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:03.762 12:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:04.021 12:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:04.983 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:04.983 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:04.983 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:04.983 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.241 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:05.241 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:05.241 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.241 12:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:05.500 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.500 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:05.500 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.500 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:05.758 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.758 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:05.758 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.758 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:06.019 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.019 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:06.019 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.019 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:06.327 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.327 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:06.327 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.327 12:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:06.601 12:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.601 12:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:06.601 12:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:06.859 12:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:07.118 12:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:08.053 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:08.053 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:08.053 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.053 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:08.312 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.312 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:08.312 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:08.312 12:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.570 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.570 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:08.570 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.570 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:08.828 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.828 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:08.828 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.828 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:17:09.085 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.085 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:09.085 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.085 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:09.342 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.342 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:09.342 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:09.342 12:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.600 12:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.600 12:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:09.600 12:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:09.858 12:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:10.115 12:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:11.061 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:11.061 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:11.061 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.061 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:11.319 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.319 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:11.319 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:11.319 12:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.578 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:11.578 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:17:11.578 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:11.578 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.838 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.838 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:11.838 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.838 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:12.097 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.097 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:12.097 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.097 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:12.355 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.355 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:12.355 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:12.355 12:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76993 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76993 ']' 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76993 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76993 00:17:12.615 killing process with pid 76993 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76993' 00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76993 
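Every check in the trace above follows the same shape: query bdevperf's I/O paths over its RPC socket with bdev_nvme_get_io_paths, select one listener port with jq, and compare a single field (current/connected/accessible); the ANA state itself is flipped on the target side with nvmf_subsystem_listener_set_ana_state. The sketch below condenses that shape into a small helper. It is not the test's own port_status/check_status functions; the rpc.py path, RPC socket, NQN, address, and ports are simply the values that appear in this trace.

#!/usr/bin/env bash
# Minimal sketch of the per-port status check exercised above (assumes a running
# SPDK target and a bdevperf instance with its RPC socket at /var/tmp/bdevperf.sock,
# as in this job).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Print one field (current / connected / accessible) for the path on a given trsvcid.
port_field() {
    "$rpc" -s "$sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
}

# Make listener 4421 inaccessible on the target, give the host a moment to notice,
# then read back what bdevperf reports for that path.
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1
[[ "$(port_field 4421 accessible)" == "false" ]] && echo "4421: accessible=false"

The sleep 1 mirrors the test's own pause between set_ANA_state and check_status, presumably to give the host's multipath code time to process the ANA change before the fields are read back.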
00:17:12.615 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76993 00:17:12.877 Connection closed with partial response: 00:17:12.877 00:17:12.877 00:17:12.877 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76993 00:17:12.877 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:12.877 [2024-07-15 12:41:10.691642] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:12.877 [2024-07-15 12:41:10.691779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76993 ] 00:17:12.877 [2024-07-15 12:41:10.832977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.877 [2024-07-15 12:41:10.959647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.877 [2024-07-15 12:41:11.016521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:12.877 Running I/O for 90 seconds... 00:17:12.877 [2024-07-15 12:41:27.026707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.877 [2024-07-15 12:41:27.026800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.877 [2024-07-15 12:41:27.026877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.877 [2024-07-15 12:41:27.026899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.877 [2024-07-15 12:41:27.026922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.877 [2024-07-15 12:41:27.026938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:12.877 [2024-07-15 12:41:27.026961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.877 [2024-07-15 12:41:27.026976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:12.877 [2024-07-15 12:41:27.026999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.877 [2024-07-15 12:41:27.027014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:12.877 [2024-07-15 12:41:27.027037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.027649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.027687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.027737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.027792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.027834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.027975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 
m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.878 [2024-07-15 12:41:27.028387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.878 [2024-07-15 12:41:27.028787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:12.878 [2024-07-15 12:41:27.028812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.028829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.028854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.028870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.028894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.028910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.028935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 
12:41:27.028951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.028976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.028993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.029160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.029361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.029413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130056 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.029975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.029994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.030036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 
m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.879 [2024-07-15 12:41:27.030701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.030762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.030804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:12.879 [2024-07-15 12:41:27.030828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.879 [2024-07-15 12:41:27.030845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.030869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.030895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.030921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.030937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.030962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.030978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.031723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130192 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.031976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.031994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.032036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.880 [2024-07-15 12:41:27.032078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.880 [2024-07-15 12:41:27.032403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:12.880 [2024-07-15 12:41:27.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.881 [2024-07-15 12:41:27.032469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:12.881 [2024-07-15 12:41:27.032493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.881 [2024-07-15 12:41:27.032510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:12.881 [2024-07-15 12:41:27.032534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.881 [2024-07-15 12:41:27.032550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:12.881 [2024-07-15 12:41:27.032574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.881 [2024-07-15 12:41:27.032591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:12.881 [2024-07-15 
12:41:27.032615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:12.881 [ ... several hundred near-identical notice pairs condensed: from 12:41:27.03 through 12:41:42.62 every outstanding READ/WRITE on sqid:1 (lba ranges ~129840-129864 and ~92760-93960, assorted cid values) is printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02), qid:1, sqhd counting from 007d and wrapping around to 0035 (cid:122) ... ]
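The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions above are expected noise for this test rather than failures: multipath_status.sh is toggling the ANA state of the subsystem's listeners (here to inaccessible) while fio verify I/O runs, so in-flight READ/WRITE commands on qid:1 come back with that path status and the host-side bdev_nvme multipath layer retries them on the remaining path. The per-device summary just below reports the same workload as 7584.24 IOPS of 4096-byte I/O over about 32.57 seconds; a quick sanity check of the MiB/s column (plain shell arithmetic, not part of the test output):

awk 'BEGIN { printf "%.2f MiB/s\n", 7584.24 * 4096 / (1024 * 1024) }'   # -> 29.63, matching the table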
00:17:12.884 Received shutdown signal, test time was about 32.574500 seconds
00:17:12.884
00:17:12.884 Latency(us)
00:17:12.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:12.884 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:12.884 Verification LBA range: start 0x0 length 0x4000
00:17:12.884 Nvme0n1 : 32.57 7584.24 29.63 0.00 0.00 16844.43 1020.28 4026531.84
00:17:12.884 ===================================================================================================================
00:17:12.884 Total : 7584.24 29.63 0.00 0.00 16844.43 1020.28 4026531.84
00:17:12.884 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in
{1..20} 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.143 rmmod nvme_tcp 00:17:13.143 rmmod nvme_fabrics 00:17:13.143 rmmod nvme_keyring 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76933 ']' 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76933 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76933 ']' 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76933 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76933 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76933' 00:17:13.143 killing process with pid 76933 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76933 00:17:13.143 12:41:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76933 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.403 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.662 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:13.662 00:17:13.662 real 0m38.392s 00:17:13.662 user 2m3.197s 00:17:13.662 sys 0m11.687s 00:17:13.662 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.662 ************************************ 00:17:13.662 END TEST nvmf_host_multipath_status 00:17:13.662 12:41:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 ************************************ 00:17:13.662 12:41:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:13.662 12:41:46 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test 
nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:13.662 12:41:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.662 12:41:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.662 12:41:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 ************************************ 00:17:13.662 START TEST nvmf_discovery_remove_ifc 00:17:13.662 ************************************ 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:13.662 * Looking for test storage... 00:17:13.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.662 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.663 12:41:46 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:13.663 Cannot find device "nvmf_tgt_br" 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.663 Cannot find device "nvmf_tgt_br2" 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:13.663 Cannot find device "nvmf_tgt_br" 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:13.663 Cannot find device "nvmf_tgt_br2" 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:13.663 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
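The ip(8) calls above and in the lines that follow are nvmftestinit/nvmf_veth_init from test/nvmf/common.sh building the virtual test network. Gathered into one sketch (interface names and addresses exactly as logged; only the grouping and comments are added, and the link-up/iptables/ping steps of the next lines are summarized at the end):

ip netns add nvmf_tgt_ns_spdk                                  # target side lives in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator interface + bridge-side peer
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface (10.0.0.3)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge                                # nvmf_br joins the three host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# ...plus `ip link set ... up` on every interface, an INPUT accept for tcp/4420 on
# nvmf_init_if and a FORWARD accept on nvmf_br, as the following lines show.

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridged path carries traffic before any SPDK process is started.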
00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:13.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:13.923 00:17:13.923 --- 10.0.0.2 ping statistics --- 00:17:13.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.923 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:13.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:13.923 00:17:13.923 --- 10.0.0.3 ping statistics --- 00:17:13.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.923 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:13.923 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:14.182 00:17:14.182 --- 10.0.0.1 ping statistics --- 00:17:14.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.182 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77773 00:17:14.182 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77773 00:17:14.183 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77773 ']' 00:17:14.183 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.183 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.183 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.183 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.183 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.183 12:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.183 [2024-07-15 12:41:46.697862] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
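From here the test runs two SPDK processes: the NVMe-oF target inside the namespace, controlled over /var/tmp/spdk.sock, and a second nvmf_tgt instance outside the namespace on /tmp/host.sock that acts as the host and attaches via bdev_nvme discovery. Condensed from the commands logged here and in the next lines (paths and flags are verbatim from the log; only the comments and the grouping are added):

# NVMe-oF target, pinned to core 1 (-m 0x2), all tracepoint groups enabled (-e 0xFFFF):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

# Host-side app: the same binary reused for its bdev_nvme module, with its own RPC socket:
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme

# Attach through the discovery service on 10.0.0.2:8009; the short reconnect/loss
# timeouts let the controller be torn down quickly once the target interface goes away:
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

Further down, wait_for_bdev/get_bdev_list polls `rpc_cmd bdev_get_bdevs` once a second until nvme0n1 shows up, and again (after the target address is deleted and nvmf_tgt_if is set down) until it disappears.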
00:17:14.183 [2024-07-15 12:41:46.697960] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.183 [2024-07-15 12:41:46.839199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.441 [2024-07-15 12:41:46.991383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.441 [2024-07-15 12:41:46.991466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.441 [2024-07-15 12:41:46.991479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.441 [2024-07-15 12:41:46.991489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.441 [2024-07-15 12:41:46.991497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.441 [2024-07-15 12:41:46.991540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.441 [2024-07-15 12:41:47.070067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.377 [2024-07-15 12:41:47.753847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.377 [2024-07-15 12:41:47.761957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:15.377 null0 00:17:15.377 [2024-07-15 12:41:47.793877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77805 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77805 /tmp/host.sock 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77805 ']' 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:15.377 12:41:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.377 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.377 12:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.377 [2024-07-15 12:41:47.876324] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:15.377 [2024-07-15 12:41:47.876479] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77805 ] 00:17:15.377 [2024-07-15 12:41:48.022790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.636 [2024-07-15 12:41:48.169643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.204 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.462 [2024-07-15 12:41:48.909387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:16.462 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.462 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:16.462 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.462 12:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 [2024-07-15 12:41:49.966981] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:17.396 [2024-07-15 12:41:49.967025] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:17.396 [2024-07-15 12:41:49.967060] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:17.396 [2024-07-15 12:41:49.973027] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:17.396 [2024-07-15 12:41:50.030318] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:17.396 [2024-07-15 12:41:50.030389] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:17.396 [2024-07-15 12:41:50.030420] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:17.396 [2024-07-15 12:41:50.030443] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:17.396 [2024-07-15 12:41:50.030472] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.396 [2024-07-15 12:41:50.035630] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c12de0 was disconnected and freed. delete nvme_qpair. 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:17.396 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:17.655 12:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:18.589 12:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:19.966 12:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:20.902 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:20.902 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.902 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.902 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:20.902 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:20.903 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:20.903 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:20.903 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.903 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:20.903 12:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:21.839 12:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:22.773 12:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:23.032 [2024-07-15 12:41:55.458084] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:23.032 [2024-07-15 12:41:55.458153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.032 [2024-07-15 12:41:55.458170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.032 [2024-07-15 12:41:55.458185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.032 [2024-07-15 12:41:55.458195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.032 [2024-07-15 12:41:55.458205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.032 [2024-07-15 12:41:55.458215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.032 [2024-07-15 12:41:55.458226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.032 [2024-07-15 12:41:55.458236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.032 [2024-07-15 12:41:55.458247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.032 [2024-07-15 12:41:55.458256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.032 [2024-07-15 12:41:55.458266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b78ac0 is same with the state(5) to be set 00:17:23.032 [2024-07-15 12:41:55.468078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78ac0 (9): Bad file descriptor 00:17:23.032 [2024-07-15 12:41:55.478097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:23.971 [2024-07-15 12:41:56.484849] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:23.971 [2024-07-15 12:41:56.484969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b78ac0 with addr=10.0.0.2, port=4420 00:17:23.971 [2024-07-15 12:41:56.485006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b78ac0 is same with the state(5) to be set 00:17:23.971 [2024-07-15 12:41:56.485076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78ac0 (9): Bad file descriptor 00:17:23.971 [2024-07-15 12:41:56.485985] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:23.971 [2024-07-15 12:41:56.486041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:23.971 [2024-07-15 12:41:56.486065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:23.971 [2024-07-15 12:41:56.486087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:23.971 [2024-07-15 12:41:56.486132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
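The one-second probes above all come from the same wait loop in the test script: list the bdev names over the host application's RPC socket and retry until nvme0n1 has dropped out. A minimal sketch of that loop, condensed from the trace (rpc_cmd is assumed to be the suite's wrapper around SPDK's rpc.py; retry limits and error handling in the real helper are omitted):

    get_bdev_list() {
        # One sorted line of bdev names, built exactly as the trace shows.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches the expected value:
        # '' while waiting for nvme0n1 to disappear, a name while waiting for one to appear.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }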
00:17:23.971 [2024-07-15 12:41:56.486158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:23.971 12:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:24.904 [2024-07-15 12:41:57.486231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:24.904 [2024-07-15 12:41:57.486313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:24.904 [2024-07-15 12:41:57.486343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:24.904 [2024-07-15 12:41:57.486356] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:24.904 [2024-07-15 12:41:57.486382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:24.904 [2024-07-15 12:41:57.486415] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:24.904 [2024-07-15 12:41:57.486474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.904 [2024-07-15 12:41:57.486492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.904 [2024-07-15 12:41:57.486505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.904 [2024-07-15 12:41:57.486515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.904 [2024-07-15 12:41:57.486525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.904 [2024-07-15 12:41:57.486535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.904 [2024-07-15 12:41:57.486546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.904 [2024-07-15 12:41:57.486555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.904 [2024-07-15 12:41:57.486566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.904 [2024-07-15 12:41:57.486583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.904 [2024-07-15 12:41:57.486600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
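The path is torn down this quickly because discovery was started with deliberately short reconnect and controller-loss timeouts; the rpc_cmd call appears earlier in the trace and is restated here for readability. The flag values are copied from that call, while the comments give one reading of what each knob does (an interpretation, not something the log itself states):

    # --reconnect-delay-sec 1       retry the TCP connection roughly once per second
    # --fast-io-fail-timeout-sec 1  start failing I/O about 1s after the path drops
    # --ctrlr-loss-timeout-sec 2    give up and delete the controller after about 2s
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach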
00:17:24.904 [2024-07-15 12:41:57.487128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7c860 (9): Bad file descriptor 00:17:24.904 [2024-07-15 12:41:57.488139] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:24.904 [2024-07-15 12:41:57.488172] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:24.904 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:25.162 12:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.094 12:41:58 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:26.094 12:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.040 [2024-07-15 12:41:59.491440] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:27.040 [2024-07-15 12:41:59.491475] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:27.040 [2024-07-15 12:41:59.491511] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:27.040 [2024-07-15 12:41:59.497476] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:27.040 [2024-07-15 12:41:59.553803] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:27.040 [2024-07-15 12:41:59.553871] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:27.040 [2024-07-15 12:41:59.553898] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:27.040 [2024-07-15 12:41:59.553915] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:27.040 [2024-07-15 12:41:59.553924] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:27.040 [2024-07-15 12:41:59.560134] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c1fd90 was disconnected and freed. delete nvme_qpair. 
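With the address and link restored, the discovery poller re-attaches the same subsystem as a fresh controller (nvme1), and the script returns to the same wait loop, this time until nvme1n1 is listed. Condensed from the trace, reusing the helpers sketched above:

    # Mirror image of the removal step: put the target address back and raise the link.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Discovery re-attaches the subsystem as nvme1; wait for its namespace bdev.
    wait_for_bdev nvme1n1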
00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77805 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77805 ']' 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77805 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77805 00:17:27.300 killing process with pid 77805 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77805' 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77805 00:17:27.300 12:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77805 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:27.559 rmmod nvme_tcp 00:17:27.559 rmmod nvme_fabrics 00:17:27.559 rmmod nvme_keyring 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:27.559 12:42:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77773 ']' 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77773 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77773 ']' 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77773 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77773 00:17:27.559 killing process with pid 77773 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77773' 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77773 00:17:27.559 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77773 00:17:27.817 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.817 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:27.818 ************************************ 00:17:27.818 END TEST nvmf_discovery_remove_ifc 00:17:27.818 ************************************ 00:17:27.818 00:17:27.818 real 0m14.231s 00:17:27.818 user 0m24.670s 00:17:27.818 sys 0m2.461s 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.818 12:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:27.818 12:42:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:27.818 12:42:00 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:27.818 12:42:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:27.818 12:42:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.818 12:42:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.818 ************************************ 00:17:27.818 START TEST nvmf_identify_kernel_target 00:17:27.818 ************************************ 00:17:27.818 12:42:00 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:28.076 * Looking for test storage... 00:17:28.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:28.076 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:28.077 Cannot find device "nvmf_tgt_br" 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.077 Cannot find device "nvmf_tgt_br2" 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:28.077 Cannot find device "nvmf_tgt_br" 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:28.077 Cannot find device "nvmf_tgt_br2" 00:17:28.077 12:42:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.077 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.335 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.335 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.335 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.335 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:28.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:28.336 00:17:28.336 --- 10.0.0.2 ping statistics --- 00:17:28.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.336 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:28.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:28.336 00:17:28.336 --- 10.0.0.3 ping statistics --- 00:17:28.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.336 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:28.336 00:17:28.336 --- 10.0.0.1 ping statistics --- 00:17:28.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.336 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:28.336 12:42:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:28.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:28.905 Waiting for block devices as requested 00:17:28.905 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:28.905 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:28.905 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:29.164 No valid GPT data, bailing 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:29.164 No valid GPT data, bailing 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:29.164 No valid GPT data, bailing 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:29.164 No valid GPT data, bailing 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:29.164 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
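This mkdir is the first step of the in-kernel target setup; the trace lines that follow create the namespace and port nodes, fill in their attributes, and link the subsystem to the port. Gathered into one readable sequence (only the echoed values appear in the xtrace output, so the configfs attribute paths below are inferred from standard nvmet usage and from the Model Number reported by the identify output later in the log):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    # Redirect targets below are inferred; the trace only records the echoed values.
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"

    # Publish the subsystem on the TCP port so the kernel target listens on 10.0.0.1:4420.
    ln -s "$subsys" "$port/subsystems/"

After this, nvme discover against 10.0.0.1:4420 reports two entries, the discovery subsystem and nqn.2016-06.io.spdk:testnqn, which is what the log shows next.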
00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -a 10.0.0.1 -t tcp -s 4420 00:17:29.423 00:17:29.423 Discovery Log Number of Records 2, Generation counter 2 00:17:29.423 =====Discovery Log Entry 0====== 00:17:29.423 trtype: tcp 00:17:29.423 adrfam: ipv4 00:17:29.423 subtype: current discovery subsystem 00:17:29.423 treq: not specified, sq flow control disable supported 00:17:29.423 portid: 1 00:17:29.423 trsvcid: 4420 00:17:29.423 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:29.423 traddr: 10.0.0.1 00:17:29.423 eflags: none 00:17:29.423 sectype: none 00:17:29.423 =====Discovery Log Entry 1====== 00:17:29.423 trtype: tcp 00:17:29.423 adrfam: ipv4 00:17:29.423 subtype: nvme subsystem 00:17:29.423 treq: not specified, sq flow control disable supported 00:17:29.423 portid: 1 00:17:29.423 trsvcid: 4420 00:17:29.423 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:29.423 traddr: 10.0.0.1 00:17:29.423 eflags: none 00:17:29.423 sectype: none 00:17:29.423 12:42:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:29.423 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:29.423 ===================================================== 00:17:29.423 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:29.423 ===================================================== 00:17:29.423 Controller Capabilities/Features 00:17:29.423 ================================ 00:17:29.423 Vendor ID: 0000 00:17:29.423 Subsystem Vendor ID: 0000 00:17:29.423 Serial Number: b92f0d211b667ed1623c 00:17:29.423 Model Number: Linux 00:17:29.423 Firmware Version: 6.7.0-68 00:17:29.423 Recommended Arb Burst: 0 00:17:29.423 IEEE OUI Identifier: 00 00 00 00:17:29.423 Multi-path I/O 00:17:29.423 May have multiple subsystem ports: No 00:17:29.423 May have multiple controllers: No 00:17:29.423 Associated with SR-IOV VF: No 00:17:29.423 Max Data Transfer Size: Unlimited 00:17:29.423 Max Number of Namespaces: 0 
00:17:29.423 Max Number of I/O Queues: 1024 00:17:29.423 NVMe Specification Version (VS): 1.3 00:17:29.423 NVMe Specification Version (Identify): 1.3 00:17:29.423 Maximum Queue Entries: 1024 00:17:29.423 Contiguous Queues Required: No 00:17:29.423 Arbitration Mechanisms Supported 00:17:29.423 Weighted Round Robin: Not Supported 00:17:29.423 Vendor Specific: Not Supported 00:17:29.423 Reset Timeout: 7500 ms 00:17:29.423 Doorbell Stride: 4 bytes 00:17:29.423 NVM Subsystem Reset: Not Supported 00:17:29.423 Command Sets Supported 00:17:29.423 NVM Command Set: Supported 00:17:29.423 Boot Partition: Not Supported 00:17:29.423 Memory Page Size Minimum: 4096 bytes 00:17:29.423 Memory Page Size Maximum: 4096 bytes 00:17:29.423 Persistent Memory Region: Not Supported 00:17:29.423 Optional Asynchronous Events Supported 00:17:29.423 Namespace Attribute Notices: Not Supported 00:17:29.423 Firmware Activation Notices: Not Supported 00:17:29.423 ANA Change Notices: Not Supported 00:17:29.423 PLE Aggregate Log Change Notices: Not Supported 00:17:29.423 LBA Status Info Alert Notices: Not Supported 00:17:29.423 EGE Aggregate Log Change Notices: Not Supported 00:17:29.423 Normal NVM Subsystem Shutdown event: Not Supported 00:17:29.423 Zone Descriptor Change Notices: Not Supported 00:17:29.423 Discovery Log Change Notices: Supported 00:17:29.423 Controller Attributes 00:17:29.423 128-bit Host Identifier: Not Supported 00:17:29.423 Non-Operational Permissive Mode: Not Supported 00:17:29.423 NVM Sets: Not Supported 00:17:29.423 Read Recovery Levels: Not Supported 00:17:29.423 Endurance Groups: Not Supported 00:17:29.423 Predictable Latency Mode: Not Supported 00:17:29.423 Traffic Based Keep ALive: Not Supported 00:17:29.423 Namespace Granularity: Not Supported 00:17:29.423 SQ Associations: Not Supported 00:17:29.423 UUID List: Not Supported 00:17:29.423 Multi-Domain Subsystem: Not Supported 00:17:29.423 Fixed Capacity Management: Not Supported 00:17:29.423 Variable Capacity Management: Not Supported 00:17:29.423 Delete Endurance Group: Not Supported 00:17:29.423 Delete NVM Set: Not Supported 00:17:29.423 Extended LBA Formats Supported: Not Supported 00:17:29.423 Flexible Data Placement Supported: Not Supported 00:17:29.423 00:17:29.423 Controller Memory Buffer Support 00:17:29.423 ================================ 00:17:29.423 Supported: No 00:17:29.423 00:17:29.423 Persistent Memory Region Support 00:17:29.423 ================================ 00:17:29.423 Supported: No 00:17:29.423 00:17:29.423 Admin Command Set Attributes 00:17:29.423 ============================ 00:17:29.423 Security Send/Receive: Not Supported 00:17:29.424 Format NVM: Not Supported 00:17:29.424 Firmware Activate/Download: Not Supported 00:17:29.424 Namespace Management: Not Supported 00:17:29.424 Device Self-Test: Not Supported 00:17:29.424 Directives: Not Supported 00:17:29.424 NVMe-MI: Not Supported 00:17:29.424 Virtualization Management: Not Supported 00:17:29.424 Doorbell Buffer Config: Not Supported 00:17:29.424 Get LBA Status Capability: Not Supported 00:17:29.424 Command & Feature Lockdown Capability: Not Supported 00:17:29.424 Abort Command Limit: 1 00:17:29.424 Async Event Request Limit: 1 00:17:29.424 Number of Firmware Slots: N/A 00:17:29.424 Firmware Slot 1 Read-Only: N/A 00:17:29.424 Firmware Activation Without Reset: N/A 00:17:29.424 Multiple Update Detection Support: N/A 00:17:29.424 Firmware Update Granularity: No Information Provided 00:17:29.424 Per-Namespace SMART Log: No 00:17:29.424 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:29.424 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:29.424 Command Effects Log Page: Not Supported 00:17:29.424 Get Log Page Extended Data: Supported 00:17:29.424 Telemetry Log Pages: Not Supported 00:17:29.424 Persistent Event Log Pages: Not Supported 00:17:29.424 Supported Log Pages Log Page: May Support 00:17:29.424 Commands Supported & Effects Log Page: Not Supported 00:17:29.424 Feature Identifiers & Effects Log Page:May Support 00:17:29.424 NVMe-MI Commands & Effects Log Page: May Support 00:17:29.424 Data Area 4 for Telemetry Log: Not Supported 00:17:29.424 Error Log Page Entries Supported: 1 00:17:29.424 Keep Alive: Not Supported 00:17:29.424 00:17:29.424 NVM Command Set Attributes 00:17:29.424 ========================== 00:17:29.424 Submission Queue Entry Size 00:17:29.424 Max: 1 00:17:29.424 Min: 1 00:17:29.424 Completion Queue Entry Size 00:17:29.424 Max: 1 00:17:29.424 Min: 1 00:17:29.424 Number of Namespaces: 0 00:17:29.424 Compare Command: Not Supported 00:17:29.424 Write Uncorrectable Command: Not Supported 00:17:29.424 Dataset Management Command: Not Supported 00:17:29.424 Write Zeroes Command: Not Supported 00:17:29.424 Set Features Save Field: Not Supported 00:17:29.424 Reservations: Not Supported 00:17:29.424 Timestamp: Not Supported 00:17:29.424 Copy: Not Supported 00:17:29.424 Volatile Write Cache: Not Present 00:17:29.424 Atomic Write Unit (Normal): 1 00:17:29.424 Atomic Write Unit (PFail): 1 00:17:29.424 Atomic Compare & Write Unit: 1 00:17:29.424 Fused Compare & Write: Not Supported 00:17:29.424 Scatter-Gather List 00:17:29.424 SGL Command Set: Supported 00:17:29.424 SGL Keyed: Not Supported 00:17:29.424 SGL Bit Bucket Descriptor: Not Supported 00:17:29.424 SGL Metadata Pointer: Not Supported 00:17:29.424 Oversized SGL: Not Supported 00:17:29.424 SGL Metadata Address: Not Supported 00:17:29.424 SGL Offset: Supported 00:17:29.424 Transport SGL Data Block: Not Supported 00:17:29.424 Replay Protected Memory Block: Not Supported 00:17:29.424 00:17:29.424 Firmware Slot Information 00:17:29.424 ========================= 00:17:29.424 Active slot: 0 00:17:29.424 00:17:29.424 00:17:29.424 Error Log 00:17:29.424 ========= 00:17:29.424 00:17:29.424 Active Namespaces 00:17:29.424 ================= 00:17:29.424 Discovery Log Page 00:17:29.424 ================== 00:17:29.424 Generation Counter: 2 00:17:29.424 Number of Records: 2 00:17:29.424 Record Format: 0 00:17:29.424 00:17:29.424 Discovery Log Entry 0 00:17:29.424 ---------------------- 00:17:29.424 Transport Type: 3 (TCP) 00:17:29.424 Address Family: 1 (IPv4) 00:17:29.424 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:29.424 Entry Flags: 00:17:29.424 Duplicate Returned Information: 0 00:17:29.424 Explicit Persistent Connection Support for Discovery: 0 00:17:29.424 Transport Requirements: 00:17:29.424 Secure Channel: Not Specified 00:17:29.424 Port ID: 1 (0x0001) 00:17:29.424 Controller ID: 65535 (0xffff) 00:17:29.424 Admin Max SQ Size: 32 00:17:29.424 Transport Service Identifier: 4420 00:17:29.424 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:29.424 Transport Address: 10.0.0.1 00:17:29.424 Discovery Log Entry 1 00:17:29.424 ---------------------- 00:17:29.424 Transport Type: 3 (TCP) 00:17:29.424 Address Family: 1 (IPv4) 00:17:29.424 Subsystem Type: 2 (NVM Subsystem) 00:17:29.424 Entry Flags: 00:17:29.424 Duplicate Returned Information: 0 00:17:29.424 Explicit Persistent Connection Support for Discovery: 0 00:17:29.424 Transport Requirements: 00:17:29.424 
Secure Channel: Not Specified 00:17:29.424 Port ID: 1 (0x0001) 00:17:29.424 Controller ID: 65535 (0xffff) 00:17:29.424 Admin Max SQ Size: 32 00:17:29.424 Transport Service Identifier: 4420 00:17:29.424 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:29.424 Transport Address: 10.0.0.1 00:17:29.424 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:29.695 get_feature(0x01) failed 00:17:29.695 get_feature(0x02) failed 00:17:29.695 get_feature(0x04) failed 00:17:29.695 ===================================================== 00:17:29.695 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:29.695 ===================================================== 00:17:29.695 Controller Capabilities/Features 00:17:29.695 ================================ 00:17:29.695 Vendor ID: 0000 00:17:29.695 Subsystem Vendor ID: 0000 00:17:29.695 Serial Number: 38c88de1e591286e0de7 00:17:29.695 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:29.695 Firmware Version: 6.7.0-68 00:17:29.695 Recommended Arb Burst: 6 00:17:29.695 IEEE OUI Identifier: 00 00 00 00:17:29.695 Multi-path I/O 00:17:29.695 May have multiple subsystem ports: Yes 00:17:29.695 May have multiple controllers: Yes 00:17:29.695 Associated with SR-IOV VF: No 00:17:29.695 Max Data Transfer Size: Unlimited 00:17:29.695 Max Number of Namespaces: 1024 00:17:29.695 Max Number of I/O Queues: 128 00:17:29.695 NVMe Specification Version (VS): 1.3 00:17:29.695 NVMe Specification Version (Identify): 1.3 00:17:29.695 Maximum Queue Entries: 1024 00:17:29.695 Contiguous Queues Required: No 00:17:29.695 Arbitration Mechanisms Supported 00:17:29.695 Weighted Round Robin: Not Supported 00:17:29.695 Vendor Specific: Not Supported 00:17:29.695 Reset Timeout: 7500 ms 00:17:29.695 Doorbell Stride: 4 bytes 00:17:29.695 NVM Subsystem Reset: Not Supported 00:17:29.695 Command Sets Supported 00:17:29.695 NVM Command Set: Supported 00:17:29.695 Boot Partition: Not Supported 00:17:29.695 Memory Page Size Minimum: 4096 bytes 00:17:29.695 Memory Page Size Maximum: 4096 bytes 00:17:29.695 Persistent Memory Region: Not Supported 00:17:29.695 Optional Asynchronous Events Supported 00:17:29.695 Namespace Attribute Notices: Supported 00:17:29.695 Firmware Activation Notices: Not Supported 00:17:29.695 ANA Change Notices: Supported 00:17:29.695 PLE Aggregate Log Change Notices: Not Supported 00:17:29.695 LBA Status Info Alert Notices: Not Supported 00:17:29.695 EGE Aggregate Log Change Notices: Not Supported 00:17:29.695 Normal NVM Subsystem Shutdown event: Not Supported 00:17:29.695 Zone Descriptor Change Notices: Not Supported 00:17:29.696 Discovery Log Change Notices: Not Supported 00:17:29.696 Controller Attributes 00:17:29.696 128-bit Host Identifier: Supported 00:17:29.696 Non-Operational Permissive Mode: Not Supported 00:17:29.696 NVM Sets: Not Supported 00:17:29.696 Read Recovery Levels: Not Supported 00:17:29.696 Endurance Groups: Not Supported 00:17:29.696 Predictable Latency Mode: Not Supported 00:17:29.696 Traffic Based Keep ALive: Supported 00:17:29.696 Namespace Granularity: Not Supported 00:17:29.696 SQ Associations: Not Supported 00:17:29.696 UUID List: Not Supported 00:17:29.696 Multi-Domain Subsystem: Not Supported 00:17:29.696 Fixed Capacity Management: Not Supported 00:17:29.696 Variable Capacity Management: Not Supported 00:17:29.696 
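
The block above is the output of two spdk_nvme_identify passes against the kernel NVMe-oF TCP target at 10.0.0.1:4420: the first reaches the discovery controller (Subsystem NQN nqn.2014-08.org.nvmexpress.discovery, two discovery log entries), while the second, whose command line is quoted in the trace, adds subnqn and identifies nqn.2016-06.io.spdk:testnqn itself. A minimal sketch of that invocation pattern; the discovery-side command is inferred from the output, only the subsystem-side one is copied from the trace:

    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

    # Discovery controller: transport ID without a subnqn (assumed form of the first pass)
    "$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'

    # NVM subsystem exported by the kernel target (as run in the trace)
    "$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The get_feature(0x01)/(0x02)/(0x04) failures printed before the second report are consistent with the kernel target not implementing those optional Get Features commands; the rejected commands show up again below as the three entries in the controller's error log.
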
Delete Endurance Group: Not Supported 00:17:29.696 Delete NVM Set: Not Supported 00:17:29.696 Extended LBA Formats Supported: Not Supported 00:17:29.696 Flexible Data Placement Supported: Not Supported 00:17:29.696 00:17:29.696 Controller Memory Buffer Support 00:17:29.696 ================================ 00:17:29.696 Supported: No 00:17:29.696 00:17:29.696 Persistent Memory Region Support 00:17:29.696 ================================ 00:17:29.696 Supported: No 00:17:29.696 00:17:29.696 Admin Command Set Attributes 00:17:29.696 ============================ 00:17:29.696 Security Send/Receive: Not Supported 00:17:29.696 Format NVM: Not Supported 00:17:29.696 Firmware Activate/Download: Not Supported 00:17:29.696 Namespace Management: Not Supported 00:17:29.696 Device Self-Test: Not Supported 00:17:29.696 Directives: Not Supported 00:17:29.696 NVMe-MI: Not Supported 00:17:29.696 Virtualization Management: Not Supported 00:17:29.696 Doorbell Buffer Config: Not Supported 00:17:29.696 Get LBA Status Capability: Not Supported 00:17:29.696 Command & Feature Lockdown Capability: Not Supported 00:17:29.696 Abort Command Limit: 4 00:17:29.696 Async Event Request Limit: 4 00:17:29.696 Number of Firmware Slots: N/A 00:17:29.696 Firmware Slot 1 Read-Only: N/A 00:17:29.696 Firmware Activation Without Reset: N/A 00:17:29.696 Multiple Update Detection Support: N/A 00:17:29.696 Firmware Update Granularity: No Information Provided 00:17:29.696 Per-Namespace SMART Log: Yes 00:17:29.696 Asymmetric Namespace Access Log Page: Supported 00:17:29.696 ANA Transition Time : 10 sec 00:17:29.696 00:17:29.696 Asymmetric Namespace Access Capabilities 00:17:29.696 ANA Optimized State : Supported 00:17:29.696 ANA Non-Optimized State : Supported 00:17:29.696 ANA Inaccessible State : Supported 00:17:29.696 ANA Persistent Loss State : Supported 00:17:29.696 ANA Change State : Supported 00:17:29.696 ANAGRPID is not changed : No 00:17:29.696 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:29.696 00:17:29.696 ANA Group Identifier Maximum : 128 00:17:29.696 Number of ANA Group Identifiers : 128 00:17:29.696 Max Number of Allowed Namespaces : 1024 00:17:29.696 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:29.696 Command Effects Log Page: Supported 00:17:29.696 Get Log Page Extended Data: Supported 00:17:29.696 Telemetry Log Pages: Not Supported 00:17:29.696 Persistent Event Log Pages: Not Supported 00:17:29.696 Supported Log Pages Log Page: May Support 00:17:29.696 Commands Supported & Effects Log Page: Not Supported 00:17:29.696 Feature Identifiers & Effects Log Page:May Support 00:17:29.696 NVMe-MI Commands & Effects Log Page: May Support 00:17:29.696 Data Area 4 for Telemetry Log: Not Supported 00:17:29.696 Error Log Page Entries Supported: 128 00:17:29.696 Keep Alive: Supported 00:17:29.696 Keep Alive Granularity: 1000 ms 00:17:29.696 00:17:29.696 NVM Command Set Attributes 00:17:29.696 ========================== 00:17:29.696 Submission Queue Entry Size 00:17:29.696 Max: 64 00:17:29.696 Min: 64 00:17:29.696 Completion Queue Entry Size 00:17:29.696 Max: 16 00:17:29.696 Min: 16 00:17:29.696 Number of Namespaces: 1024 00:17:29.696 Compare Command: Not Supported 00:17:29.696 Write Uncorrectable Command: Not Supported 00:17:29.696 Dataset Management Command: Supported 00:17:29.696 Write Zeroes Command: Supported 00:17:29.696 Set Features Save Field: Not Supported 00:17:29.696 Reservations: Not Supported 00:17:29.696 Timestamp: Not Supported 00:17:29.696 Copy: Not Supported 00:17:29.696 Volatile Write Cache: Present 
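
Compared with the discovery controller report, this second report shows what the kernel target exposes for a real NVM subsystem: keep-alive, 128 error log entries, and ANA with one group, a ten-second transition time and all five ANA states supported (namespace 1 is mapped to ANA group 1 further down). On the target side that ANA state lives in the kernel's nvmet configfs tree; a small sketch of where it can be inspected, assuming the standard nvmet layout (these reads are not part of the test itself):

    # ANA state of ANA group 1 on nvmet port 1 (e.g. "optimized")
    cat /sys/kernel/config/nvmet/ports/1/ana_groups/1/ana_state

    # ANA group that namespace 1 of the test subsystem belongs to
    cat /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/ana_grpid
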
00:17:29.696 Atomic Write Unit (Normal): 1 00:17:29.696 Atomic Write Unit (PFail): 1 00:17:29.696 Atomic Compare & Write Unit: 1 00:17:29.696 Fused Compare & Write: Not Supported 00:17:29.696 Scatter-Gather List 00:17:29.696 SGL Command Set: Supported 00:17:29.696 SGL Keyed: Not Supported 00:17:29.696 SGL Bit Bucket Descriptor: Not Supported 00:17:29.696 SGL Metadata Pointer: Not Supported 00:17:29.696 Oversized SGL: Not Supported 00:17:29.696 SGL Metadata Address: Not Supported 00:17:29.696 SGL Offset: Supported 00:17:29.696 Transport SGL Data Block: Not Supported 00:17:29.696 Replay Protected Memory Block: Not Supported 00:17:29.696 00:17:29.696 Firmware Slot Information 00:17:29.696 ========================= 00:17:29.696 Active slot: 0 00:17:29.696 00:17:29.696 Asymmetric Namespace Access 00:17:29.696 =========================== 00:17:29.696 Change Count : 0 00:17:29.696 Number of ANA Group Descriptors : 1 00:17:29.696 ANA Group Descriptor : 0 00:17:29.696 ANA Group ID : 1 00:17:29.696 Number of NSID Values : 1 00:17:29.696 Change Count : 0 00:17:29.696 ANA State : 1 00:17:29.696 Namespace Identifier : 1 00:17:29.696 00:17:29.696 Commands Supported and Effects 00:17:29.696 ============================== 00:17:29.696 Admin Commands 00:17:29.696 -------------- 00:17:29.696 Get Log Page (02h): Supported 00:17:29.696 Identify (06h): Supported 00:17:29.696 Abort (08h): Supported 00:17:29.696 Set Features (09h): Supported 00:17:29.696 Get Features (0Ah): Supported 00:17:29.696 Asynchronous Event Request (0Ch): Supported 00:17:29.696 Keep Alive (18h): Supported 00:17:29.696 I/O Commands 00:17:29.696 ------------ 00:17:29.697 Flush (00h): Supported 00:17:29.697 Write (01h): Supported LBA-Change 00:17:29.697 Read (02h): Supported 00:17:29.697 Write Zeroes (08h): Supported LBA-Change 00:17:29.697 Dataset Management (09h): Supported 00:17:29.697 00:17:29.697 Error Log 00:17:29.697 ========= 00:17:29.697 Entry: 0 00:17:29.697 Error Count: 0x3 00:17:29.697 Submission Queue Id: 0x0 00:17:29.697 Command Id: 0x5 00:17:29.697 Phase Bit: 0 00:17:29.697 Status Code: 0x2 00:17:29.697 Status Code Type: 0x0 00:17:29.697 Do Not Retry: 1 00:17:29.697 Error Location: 0x28 00:17:29.697 LBA: 0x0 00:17:29.697 Namespace: 0x0 00:17:29.697 Vendor Log Page: 0x0 00:17:29.697 ----------- 00:17:29.697 Entry: 1 00:17:29.697 Error Count: 0x2 00:17:29.697 Submission Queue Id: 0x0 00:17:29.697 Command Id: 0x5 00:17:29.697 Phase Bit: 0 00:17:29.697 Status Code: 0x2 00:17:29.697 Status Code Type: 0x0 00:17:29.697 Do Not Retry: 1 00:17:29.697 Error Location: 0x28 00:17:29.697 LBA: 0x0 00:17:29.697 Namespace: 0x0 00:17:29.697 Vendor Log Page: 0x0 00:17:29.697 ----------- 00:17:29.697 Entry: 2 00:17:29.697 Error Count: 0x1 00:17:29.697 Submission Queue Id: 0x0 00:17:29.697 Command Id: 0x4 00:17:29.697 Phase Bit: 0 00:17:29.697 Status Code: 0x2 00:17:29.697 Status Code Type: 0x0 00:17:29.697 Do Not Retry: 1 00:17:29.697 Error Location: 0x28 00:17:29.697 LBA: 0x0 00:17:29.697 Namespace: 0x0 00:17:29.697 Vendor Log Page: 0x0 00:17:29.697 00:17:29.697 Number of Queues 00:17:29.697 ================ 00:17:29.697 Number of I/O Submission Queues: 128 00:17:29.697 Number of I/O Completion Queues: 128 00:17:29.697 00:17:29.697 ZNS Specific Controller Data 00:17:29.697 ============================ 00:17:29.697 Zone Append Size Limit: 0 00:17:29.697 00:17:29.697 00:17:29.697 Active Namespaces 00:17:29.697 ================= 00:17:29.697 get_feature(0x05) failed 00:17:29.697 Namespace ID:1 00:17:29.697 Command Set Identifier: NVM (00h) 
00:17:29.697 Deallocate: Supported 00:17:29.697 Deallocated/Unwritten Error: Not Supported 00:17:29.697 Deallocated Read Value: Unknown 00:17:29.697 Deallocate in Write Zeroes: Not Supported 00:17:29.697 Deallocated Guard Field: 0xFFFF 00:17:29.697 Flush: Supported 00:17:29.697 Reservation: Not Supported 00:17:29.697 Namespace Sharing Capabilities: Multiple Controllers 00:17:29.697 Size (in LBAs): 1310720 (5GiB) 00:17:29.697 Capacity (in LBAs): 1310720 (5GiB) 00:17:29.697 Utilization (in LBAs): 1310720 (5GiB) 00:17:29.697 UUID: 50b536ee-204d-4b28-834d-d930897a5ae7 00:17:29.697 Thin Provisioning: Not Supported 00:17:29.697 Per-NS Atomic Units: Yes 00:17:29.697 Atomic Boundary Size (Normal): 0 00:17:29.697 Atomic Boundary Size (PFail): 0 00:17:29.697 Atomic Boundary Offset: 0 00:17:29.697 NGUID/EUI64 Never Reused: No 00:17:29.697 ANA group ID: 1 00:17:29.697 Namespace Write Protected: No 00:17:29.697 Number of LBA Formats: 1 00:17:29.697 Current LBA Format: LBA Format #00 00:17:29.697 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:29.697 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.697 rmmod nvme_tcp 00:17:29.697 rmmod nvme_fabrics 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.697 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:29.975 
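
With the identify run finished, nvmftestfini unwinds the test setup: it unloads the initiator-side nvme-tcp and nvme-fabrics modules (the rmmod lines in the trace), flushes the test address from nvmf_init_if, and clean_kernel_target then removes the configfs objects that were created for the kernel target, as the commands that follow show. A rough sketch of that teardown using the paths from the trace; the back-off in the retry loop and the redirect target of the traced "echo 0" are assumptions:

    # Initiator side: unload transport modules, retrying while the connection drains
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1                                   # assumed back-off, not visible in the trace
    done
    modprobe -v -r nvme-fabrics
    ip -4 addr flush nvmf_init_if

    # Target side: disable and remove the kernel nvmet namespace, port link and subsystem
    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > "/sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed redirect target
    rm -f  "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"
    rmdir  "/sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1"
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  "/sys/kernel/config/nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet
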
12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:29.975 12:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:30.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:30.542 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:30.801 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:30.801 00:17:30.801 real 0m2.864s 00:17:30.801 user 0m0.967s 00:17:30.801 sys 0m1.357s 00:17:30.801 12:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.801 12:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.801 ************************************ 00:17:30.801 END TEST nvmf_identify_kernel_target 00:17:30.801 ************************************ 00:17:30.801 12:42:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:30.801 12:42:03 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:30.801 12:42:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:30.801 12:42:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.801 12:42:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.801 ************************************ 00:17:30.801 START TEST nvmf_auth_host 00:17:30.801 ************************************ 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:30.801 * Looking for test storage... 
00:17:30.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:30.801 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:31.058 Cannot find device "nvmf_tgt_br" 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.058 Cannot find device "nvmf_tgt_br2" 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:31.058 Cannot find device "nvmf_tgt_br" 
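
nvmftestinit for the auth test rebuilds the virtual network that nvmf_veth_init manages: the "Cannot find device" and "Cannot open network namespace" messages around this point are the best-effort teardown of any leftover topology, after which a target network namespace, the veth pairs, a bridge and the 10.0.0.x address plan are created and TCP port 4420 is opened in iptables. Condensed from the commands that follow in the trace (the individual "ip link set ... up" calls are summarised in a comment):

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

    # bring the interfaces up (host side, namespace side and lo inside the namespace), then bridge them
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks that close this block confirm the plumbing: the host reaches 10.0.0.2 and 10.0.0.3 across the bridge, and the namespace reaches 10.0.0.1 back on the host side.
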
00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:31.058 Cannot find device "nvmf_tgt_br2" 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:31.058 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.316 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.316 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:31.316 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.316 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.316 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:31.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:31.316 00:17:31.316 --- 10.0.0.2 ping statistics --- 00:17:31.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.316 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:31.316 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:31.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:31.316 00:17:31.316 --- 10.0.0.3 ping statistics --- 00:17:31.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.317 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:31.317 00:17:31.317 --- 10.0.0.1 ping statistics --- 00:17:31.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.317 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78695 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78695 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78695 ']' 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.317 12:42:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.317 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8627d752026828b94c69c1dac7cf6436 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5kp 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8627d752026828b94c69c1dac7cf6436 0 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8627d752026828b94c69c1dac7cf6436 0 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8627d752026828b94c69c1dac7cf6436 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:32.253 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5kp 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5kp 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5kp 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=999240fefdc1195c01ca77b02437c01c6a77b4c44d4062ef314e49247e729e21 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iap 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 999240fefdc1195c01ca77b02437c01c6a77b4c44d4062ef314e49247e729e21 3 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 999240fefdc1195c01ca77b02437c01c6a77b4c44d4062ef314e49247e729e21 3 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=999240fefdc1195c01ca77b02437c01c6a77b4c44d4062ef314e49247e729e21 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:32.513 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iap 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iap 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.iap 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2c52db7a5d4d2f53a0dfcbf2f5d53b214b2bfe8b41bb71b5 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ccJ 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2c52db7a5d4d2f53a0dfcbf2f5d53b214b2bfe8b41bb71b5 0 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2c52db7a5d4d2f53a0dfcbf2f5d53b214b2bfe8b41bb71b5 0 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2c52db7a5d4d2f53a0dfcbf2f5d53b214b2bfe8b41bb71b5 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ccJ 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ccJ 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ccJ 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=351322bd2e0e7bd8d3990974af10d489a8ed951cb86898ff 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iwP 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 351322bd2e0e7bd8d3990974af10d489a8ed951cb86898ff 2 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 351322bd2e0e7bd8d3990974af10d489a8ed951cb86898ff 2 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=351322bd2e0e7bd8d3990974af10d489a8ed951cb86898ff 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iwP 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iwP 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iwP 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8f90353a0c6295758f001849fa3cd142 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:32.513 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Vsm 00:17:32.514 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8f90353a0c6295758f001849fa3cd142 
1 00:17:32.514 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8f90353a0c6295758f001849fa3cd142 1 00:17:32.514 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.514 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.514 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8f90353a0c6295758f001849fa3cd142 00:17:32.514 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:32.514 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Vsm 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Vsm 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Vsm 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=73b12956cc28e01dbebd1bf3ebb61125 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.COA 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 73b12956cc28e01dbebd1bf3ebb61125 1 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 73b12956cc28e01dbebd1bf3ebb61125 1 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=73b12956cc28e01dbebd1bf3ebb61125 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.COA 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.COA 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.COA 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:32.773 12:42:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fad1c6e782e61a39b82ba9fc88081f3e8547605f4a8e1572 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X6y 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fad1c6e782e61a39b82ba9fc88081f3e8547605f4a8e1572 2 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fad1c6e782e61a39b82ba9fc88081f3e8547605f4a8e1572 2 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fad1c6e782e61a39b82ba9fc88081f3e8547605f4a8e1572 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X6y 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X6y 00:17:32.773 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.X6y 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de66bffd607bc6b354d928fdc3397592 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PrO 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de66bffd607bc6b354d928fdc3397592 0 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de66bffd607bc6b354d928fdc3397592 0 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de66bffd607bc6b354d928fdc3397592 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PrO 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PrO 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.PrO 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e6cec38d9c985328d2d3bad0e47e1afeefa2ca306fda6f1ed793d8e7df0d21a6 00:17:32.774 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eq2 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e6cec38d9c985328d2d3bad0e47e1afeefa2ca306fda6f1ed793d8e7df0d21a6 3 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e6cec38d9c985328d2d3bad0e47e1afeefa2ca306fda6f1ed793d8e7df0d21a6 3 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e6cec38d9c985328d2d3bad0e47e1afeefa2ca306fda6f1ed793d8e7df0d21a6 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eq2 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eq2 00:17:33.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eq2 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78695 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78695 ']' 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
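
The stretch above is auth.sh generating its DH-HMAC-CHAP key material: gen_dhchap_key <digest> <len> is called nine times to fill keys[0..4] and ckeys[0..3] (ckeys[4] is deliberately left empty). Each call reads len/2 random bytes with xxd, hands the hex string to a small inline python helper that wraps it as a DHHC-1 secret for the requested digest, writes the result to a mktemp file named after the digest, and restricts it to mode 0600. A condensed sketch of one call; gen_key is a hypothetical stand-in and the python wrapping step is only indicated, not reproduced:

    gen_key() {                      # stand-in for gen_dhchap_key in nvmf/common.sh
        local digest=$1 len=$2
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # "len" hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # The trace feeds $key and the digest through "python -" here to emit the
        # DHHC-1 formatted secret; this sketch just stores the raw hex instead.
        echo "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    key0=$(gen_key null 32)          # e.g. /tmp/spdk.key-null.5kp in this run

The null/sha256/sha384/sha512 digest name selects the hash tag embedded in the secret (the digests map shown in the trace), and the paired ckeyN files later serve as the controller-side secrets for bidirectional authentication.
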
00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.033 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5kp 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.iap ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iap 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ccJ 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.iwP ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iwP 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Vsm 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.COA ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.COA 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.X6y 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.PrO ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.PrO 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eq2 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
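Each secret file is then registered with the running SPDK target through the keyring_file_add_key RPC; rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A minimal equivalent for the keys generated in this run (the rpc.py path is assumed from the repo layout visible elsewhere in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # assumed path
"$rpc" -s /var/tmp/spdk.sock keyring_file_add_key key3  /tmp/spdk.key-sha384.X6y
"$rpc" -s /var/tmp/spdk.sock keyring_file_add_key ckey3 /tmp/spdk.key-null.PrO
"$rpc" -s /var/tmp/spdk.sock keyring_file_add_key key4  /tmp/spdk.key-sha512.eq2   # keyid 4 has no companion ckey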
00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:33.293 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:33.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:33.811 Waiting for block devices as requested 00:17:33.811 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:33.811 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:34.379 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:34.379 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:34.379 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:34.379 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:34.379 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:34.380 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.380 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:34.380 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:34.380 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:34.639 No valid GPT data, bailing 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:34.639 No valid GPT data, bailing 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:34.639 No valid GPT data, bailing 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:34.639 No valid GPT data, bailing 00:17:34.639 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:34.898 12:42:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:34.898 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -a 10.0.0.1 -t tcp -s 4420 00:17:34.899 00:17:34.899 Discovery Log Number of Records 2, Generation counter 2 00:17:34.899 =====Discovery Log Entry 0====== 00:17:34.899 trtype: tcp 00:17:34.899 adrfam: ipv4 00:17:34.899 subtype: current discovery subsystem 00:17:34.899 treq: not specified, sq flow control disable supported 00:17:34.899 portid: 1 00:17:34.899 trsvcid: 4420 00:17:34.899 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:34.899 traddr: 10.0.0.1 00:17:34.899 eflags: none 00:17:34.899 sectype: none 00:17:34.899 =====Discovery Log Entry 1====== 00:17:34.899 trtype: tcp 00:17:34.899 adrfam: ipv4 00:17:34.899 subtype: nvme subsystem 00:17:34.899 treq: not specified, sq flow control disable supported 00:17:34.899 portid: 1 00:17:34.899 trsvcid: 4420 00:17:34.899 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:34.899 traddr: 10.0.0.1 00:17:34.899 eflags: none 00:17:34.899 sectype: none 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- 
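configure_kernel_target builds the whole kernel-side export through nvmet's configfs tree: subsystem, namespace and port directories, the namespace pointed at the one block device that passed the GPT check (/dev/nvme1n1 here), the subsystem linked into the TCP port, and finally the per-host access control the auth test needs. The xtrace shows the echo commands but not their redirection targets, so the attribute names below are a reconstruction from the standard nvmet configfs layout rather than a verbatim copy of nvmf/common.sh:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
# the trace also echoes "SPDK-nqn.2024-02.io.spdk:cnode0", presumably into the
# subsystem's model attribute
echo 1            > "$subsys/attr_allow_any_host"   # flipped to 0 below once allowed_hosts is populated
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0            > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"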
host/auth.sh@93 -- # IFS=, 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.899 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 nvme0n1 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- 
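On the initiator side, connect_authenticate first widens the SPDK host's negotiation policy with bdev_nvme_set_options and then attaches the controller with a DH-HMAC-CHAP key pair taken from the keyring; success is checked by reading the controller name back before detaching. The RPC names and arguments are exactly the ones in the trace; only the direct rpc.py invocation around them is a sketch (path assumed as before):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_set_options \
    --dhchap-digests  sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" once authentication succeeded
"$rpc" bdev_nvme_detach_controller nvme0              # torn down before the next combination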
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- 
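Before each attach, nvmet_auth_set_key reprograms what the kernel target expects from host0: the HMAC, the FFDHE group, and the host and (optionally) controller secrets for that keyid. The redirection targets are again hidden by the xtrace, so the dhchap_* attribute names below are assumed from the standard per-host nvmet configfs entries; the values are the ones echoed for keyid 0 in this iteration:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group under test
echo "DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x:" > "$host/dhchap_key"
echo "DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=:" > "$host/dhchap_ctrl_key"   # only when a ckey exists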
nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 nvme0n1 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.417 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 nvme0n1 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.418 12:42:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.418 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.677 nvme0n1 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:35.677 12:42:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.677 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.937 nvme0n1 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.937 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.938 nvme0n1 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.938 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
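Everything from here to the end of the test repeats that attach/verify/detach cycle across every combination; at this point the sweep has just moved from ffdhe2048 to ffdhe3072 while still on sha256. The driving structure in host/auth.sh reduces to a triple loop like this sketch (array contents inferred from the options set earlier in the trace, function bodies elided):

for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do                         # 0..4
      nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # program the kernel target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # set_options + attach + verify + detach
    done
  done
done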
sha256 --dhchap-dhgroups ffdhe3072 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.505 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.506 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.506 nvme0n1 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.506 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.765 nvme0n1 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.765 12:42:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.765 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.024 nvme0n1 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.024 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.025 nvme0n1 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.025 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.285 nvme0n1 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.285 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.544 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.111 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.112 nvme0n1 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.112 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.371 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.630 nvme0n1 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.630 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.631 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.890 nvme0n1 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.890 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.149 nvme0n1 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.149 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.150 12:42:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.150 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.409 nvme0n1 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.409 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.311 nvme0n1 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.311 12:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.570 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.571 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.830 nvme0n1 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.830 
12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.830 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.397 nvme0n1 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.397 12:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.656 nvme0n1 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.656 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.657 12:42:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.657 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.225 nvme0n1 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.225 12:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.791 nvme0n1 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.791 12:42:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.791 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.356 nvme0n1 00:17:44.356 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.357 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.357 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.357 12:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.357 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.357 12:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.357 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.616 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.183 nvme0n1 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.183 
12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.183 12:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 nvme0n1 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.749 
12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.749 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.313 nvme0n1 00:17:46.313 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.313 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.313 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.313 12:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.313 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.313 12:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.570 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.571 nvme0n1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.571 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.829 nvme0n1 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.829 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.087 nvme0n1 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.087 nvme0n1 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.087 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.344 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.344 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.344 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.345 nvme0n1 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.345 12:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.604 nvme0n1 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.604 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.862 nvme0n1 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.862 nvme0n1 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.862 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 nvme0n1 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.120 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.376 nvme0n1 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.376 12:42:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.376 12:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.376 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.633 nvme0n1 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.633 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.890 nvme0n1 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.890 12:42:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.890 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.147 nvme0n1 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:49.147 12:42:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.147 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.404 12:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.404 nvme0n1 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.404 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.662 nvme0n1 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.662 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.921 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.922 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.187 nvme0n1 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.187 12:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.782 nvme0n1 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.782 12:42:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.782 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.041 nvme0n1 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.041 12:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 nvme0n1 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:51.608 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.609 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.866 nvme0n1 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
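For reference, one round of the sweep the trace above walks through condenses to the following sequence (a minimal sketch; rpc_cmd, nvmet_auth_set_key, the NQNs and 10.0.0.1:4420 are all taken from the trace itself, and key2/ckey2 stand for the DHHC-1 secrets it echoes for key id 2):

  # target side: install key/ckey 2 for hmac(sha384) with DH group ffdhe6144
  nvmet_auth_set_key sha384 ffdhe6144 2
  # host side: restrict the initiator to the same digest and DH group
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # attach with DH-HMAC-CHAP; supplying a controller key makes the authentication bidirectional
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify the controller came up, then detach before the next combination
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
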
00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.866 12:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.798 nvme0n1 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.798 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.365 nvme0n1 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.365 12:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.931 nvme0n1 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.931 12:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.866 nvme0n1 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.866 12:42:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.866 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.867 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.867 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.867 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:54.867 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.867 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.433 nvme0n1 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.433 12:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.433 nvme0n1 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.433 12:42:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.433 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.693 nvme0n1 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:55.693 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.694 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.953 nvme0n1 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.953 12:42:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.953 12:42:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.953 nvme0n1 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.953 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.210 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.210 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.210 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 nvme0n1 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.211 12:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.470 nvme0n1 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.470 
12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.470 12:42:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.470 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.729 nvme0n1 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.729 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.988 nvme0n1 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.988 12:42:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
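For orientation, the block that keeps repeating above is one pass of connect_authenticate (host/auth.sh@55-65 in the trace): it pins the host to a single DH-HMAC-CHAP digest/DH-group pair, attaches a TCP controller with the per-key-id secrets, checks that a controller named nvme0 actually appears (i.e. authentication succeeded), and detaches it again. Below is a minimal sketch of that pass using only the RPCs visible in the trace; the rpc.py path and the assumption that keys named key<id>/ckey<id> were loaded into SPDK's keyring earlier in the test are mine, not part of this excerpt.

rpc=./scripts/rpc.py   # assumed: run from the SPDK repository root
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ctrlr_key=()
    # key id 4 has no controller (bidirectional) secret, so the option is skipped there;
    # ckeys[] is the controller-key array the script populates before this section
    [[ -n ${ckeys[keyid]:-} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")

    # limit the initiator to this digest/DH group combination
    $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 10.0.0.1 is what get_main_ns_ip resolves NVMF_INITIATOR_IP to in this run
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ctrlr_key[@]}"

    # the attach only succeeds if DH-HMAC-CHAP authentication passed; verify, then clean up
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0
}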
00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.988 nvme0n1 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.988 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.246 
12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.246 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.247 nvme0n1 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.247 12:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.505 nvme0n1 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.505 12:42:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.505 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.506 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.506 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.506 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.763 nvme0n1 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
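The other half of each pass is nvmet_auth_set_key (host/auth.sh@42-51), whose echo calls are what the trace shows for 'hmac(sha512)', the DH group and the DHHC-1 secrets. The excerpt does not show the redirection targets, so the configfs paths below are an assumption based on the usual Linux nvmet host attributes; keys[] and ckeys[] are the secret arrays the script populates before this section.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac($digest)"   > "$host/dhchap_hash"      # e.g. hmac(sha512)
    echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe4096
    echo "${keys[keyid]}"  > "$host/dhchap_key"       # host secret, DHHC-1:xx:...
    # only key ids that have a controller secret get a bidirectional key on the target
    [[ -n ${ckeys[keyid]:-} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}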
00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:57.763 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.764 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.021 nvme0n1 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.021 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 nvme0n1 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.279 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.280 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.280 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.280 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.280 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.538 12:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 nvme0n1 00:17:58.538 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.538 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.538 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.538 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.538 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.797 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.798 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.056 nvme0n1 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:17:59.056 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.057 12:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.623 nvme0n1 00:17:59.623 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.623 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.623 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.623 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.623 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.624 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.882 nvme0n1 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.882 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.883 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.449 nvme0n1 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.449 12:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.748 nvme0n1 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.748 12:42:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyN2Q3NTIwMjY4MjhiOTRjNjljMWRhYzdjZjY0MzZPLT4x: 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTk5MjQwZmVmZGMxMTk1YzAxY2E3N2IwMjQzN2MwMWM2YTc3YjRjNDRkNDA2MmVmMzE0ZTQ5MjQ3ZTcyOWUyMdzxArE=: 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.748 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.313 nvme0n1 00:18:01.313 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.313 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.313 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.313 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.313 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.313 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.571 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.571 12:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.571 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.571 12:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:01.571 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.572 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.139 nvme0n1 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.139 12:42:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.139 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGY5MDM1M2EwYzYyOTU3NThmMDAxODQ5ZmEzY2QxNDJ4zLB1: 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: ]] 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzNiMTI5NTZjYzI4ZTAxZGJlYmQxYmYzZWJiNjExMjXdPtV2: 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.140 12:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 nvme0n1 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFkMWM2ZTc4MmU2MWEzOWI4MmJhOWZjODgwODFmM2U4NTQ3NjA1ZjRhOGUxNTcyg6Wnsg==: 00:18:02.707 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: ]] 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU2NmJmZmQ2MDdiYzZiMzU0ZDkyOGZkYzMzOTc1OTLzzS3j: 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:02.708 12:42:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.708 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.966 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.540 nvme0n1 00:18:03.540 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.540 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.540 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.540 12:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.540 12:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTZjZWMzOGQ5Yzk4NTMyOGQyZDNiYWQwZTQ3ZTFhZmVlZmEyY2EzMDZmZGE2ZjFlZDc5M2Q4ZTdkZjBkMjFhNtKVk7c=: 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:03.540 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.114 nvme0n1 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmM1MmRiN2E1ZDRkMmY1M2EwZGZjYmYyZjVkNTNiMjE0YjJiZmU4YjQxYmI3MWI1X6Mc9g==: 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzUxMzIyYmQyZTBlN2JkOGQzOTkwOTc0YWYxMGQ0ODlhOGVkOTUxY2I4Njg5OGZmn2r1Mg==: 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.114 
12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.114 request: 00:18:04.114 { 00:18:04.114 "name": "nvme0", 00:18:04.114 "trtype": "tcp", 00:18:04.114 "traddr": "10.0.0.1", 00:18:04.114 "adrfam": "ipv4", 00:18:04.114 "trsvcid": "4420", 00:18:04.114 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:04.114 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:04.114 "prchk_reftag": false, 00:18:04.114 "prchk_guard": false, 00:18:04.114 "hdgst": false, 00:18:04.114 "ddgst": false, 00:18:04.114 "method": "bdev_nvme_attach_controller", 00:18:04.114 "req_id": 1 00:18:04.114 } 00:18:04.114 Got JSON-RPC error response 00:18:04.114 response: 00:18:04.114 { 00:18:04.114 "code": -5, 00:18:04.114 "message": "Input/output error" 00:18:04.114 } 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.114 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.374 request: 00:18:04.374 { 00:18:04.374 "name": "nvme0", 00:18:04.374 "trtype": "tcp", 00:18:04.374 "traddr": "10.0.0.1", 00:18:04.374 "adrfam": "ipv4", 00:18:04.374 "trsvcid": "4420", 00:18:04.374 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:04.374 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:04.374 "prchk_reftag": false, 00:18:04.374 "prchk_guard": false, 00:18:04.374 "hdgst": false, 00:18:04.374 "ddgst": false, 00:18:04.374 "dhchap_key": "key2", 00:18:04.374 "method": "bdev_nvme_attach_controller", 00:18:04.374 "req_id": 1 00:18:04.374 } 00:18:04.374 Got JSON-RPC error response 00:18:04.374 response: 00:18:04.374 { 00:18:04.374 "code": -5, 00:18:04.374 "message": "Input/output error" 00:18:04.374 } 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:04.374 12:42:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.374 request: 00:18:04.374 { 00:18:04.374 "name": "nvme0", 00:18:04.374 "trtype": "tcp", 00:18:04.374 "traddr": "10.0.0.1", 00:18:04.374 "adrfam": "ipv4", 
00:18:04.374 "trsvcid": "4420", 00:18:04.374 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:04.374 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:04.374 "prchk_reftag": false, 00:18:04.374 "prchk_guard": false, 00:18:04.374 "hdgst": false, 00:18:04.374 "ddgst": false, 00:18:04.374 "dhchap_key": "key1", 00:18:04.374 "dhchap_ctrlr_key": "ckey2", 00:18:04.374 "method": "bdev_nvme_attach_controller", 00:18:04.374 "req_id": 1 00:18:04.374 } 00:18:04.374 Got JSON-RPC error response 00:18:04.374 response: 00:18:04.374 { 00:18:04.374 "code": -5, 00:18:04.374 "message": "Input/output error" 00:18:04.374 } 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.374 12:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.374 rmmod nvme_tcp 00:18:04.374 rmmod nvme_fabrics 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78695 ']' 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78695 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78695 ']' 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78695 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.374 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78695 00:18:04.633 killing process with pid 78695 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78695' 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78695 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78695 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.633 
12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:04.633 12:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:04.892 12:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:05.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:05.461 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:05.720 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:05.720 12:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5kp /tmp/spdk.key-null.ccJ /tmp/spdk.key-sha256.Vsm /tmp/spdk.key-sha384.X6y /tmp/spdk.key-sha512.eq2 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:05.720 12:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:05.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:05.979 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.979 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.979 00:18:05.979 real 0m35.262s 00:18:05.979 user 0m31.987s 00:18:05.979 sys 0m3.743s 00:18:05.979 12:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.979 ************************************ 00:18:05.979 12:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:18:05.979 END TEST nvmf_auth_host 00:18:05.979 ************************************ 00:18:05.979 12:42:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:05.979 12:42:38 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:18:05.979 12:42:38 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:05.979 12:42:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:05.979 12:42:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.979 12:42:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:06.237 ************************************ 00:18:06.237 START TEST nvmf_digest 00:18:06.237 ************************************ 00:18:06.237 12:42:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:06.237 * Looking for test storage... 00:18:06.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:06.237 12:42:38 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.237 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:06.238 Cannot find device "nvmf_tgt_br" 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:06.238 Cannot find device "nvmf_tgt_br2" 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:06.238 Cannot find device "nvmf_tgt_br" 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:18:06.238 12:42:38 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:06.238 Cannot find device "nvmf_tgt_br2" 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:06.238 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:06.497 12:42:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.497 12:42:39 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:06.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:06.497 00:18:06.497 --- 10.0.0.2 ping statistics --- 00:18:06.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.497 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:06.497 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:06.497 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:06.497 00:18:06.497 --- 10.0.0.3 ping statistics --- 00:18:06.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.497 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:06.497 00:18:06.497 --- 10.0.0.1 ping statistics --- 00:18:06.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.497 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.497 12:42:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.756 12:42:39 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:06.757 ************************************ 00:18:06.757 START TEST nvmf_digest_clean 00:18:06.757 ************************************ 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:06.757 12:42:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80262 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80262 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80262 ']' 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:06.757 12:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:06.757 [2024-07-15 12:42:39.253501] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:06.757 [2024-07-15 12:42:39.253620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.757 [2024-07-15 12:42:39.394630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.016 [2024-07-15 12:42:39.506999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.016 [2024-07-15 12:42:39.507056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.016 [2024-07-15 12:42:39.507068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.016 [2024-07-15 12:42:39.507078] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.016 [2024-07-15 12:42:39.507086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
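The nvmf_veth_init sequence traced above builds the test network before any NVMe/TCP traffic flows: the target lives in its own network namespace and is reachable from the initiator through veth pairs joined by a bridge. Below is a condensed sketch of that topology, reassembled from the ip/iptables commands in the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and is omitted here only for brevity.

  # Target gets its own network namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 = initiator, 10.0.0.2 = target listener.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring everything up and join the bridge ends of both veth pairs.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Let NVMe/TCP (port 4420) in, allow bridged forwarding, and verify both directions.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1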
00:18:07.016 [2024-07-15 12:42:39.507112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.584 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.584 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:07.584 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.584 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.584 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:07.844 [2024-07-15 12:42:40.354548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:07.844 null0 00:18:07.844 [2024-07-15 12:42:40.404806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.844 [2024-07-15 12:42:40.428934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80295 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80295 /var/tmp/bperf.sock 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80295 ']' 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
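The target itself is configured through a single batched rpc_cmd inside common_target_config, so the individual RPC calls are not expanded in the trace; only their side effects show up as the uring socket override, the null0 bdev, the TCP transport init and the listener on 10.0.0.2:4420. A rough one-call-at-a-time equivalent is sketched below; the null bdev size and block size are illustrative assumptions, while the transport options, serial number and NQN come from the variables visible in the trace.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC sock_set_default_impl -i uring     # request the uring socket implementation (matches the override notice above)
  $RPC framework_start_init               # the target was started with --wait-for-rpc
  $RPC bdev_null_create null0 100 4096    # assumed size (MiB) and block size for the null backend
  $RPC nvmf_create_transport -t tcp -o    # "-t tcp -o" is NVMF_TRANSPORT_OPTS in the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4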
00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.844 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:07.844 [2024-07-15 12:42:40.486836] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:07.844 [2024-07-15 12:42:40.486930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80295 ] 00:18:08.102 [2024-07-15 12:42:40.628310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.102 [2024-07-15 12:42:40.753267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.039 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.039 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:09.039 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:09.039 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:09.039 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:09.298 [2024-07-15 12:42:41.770576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:09.298 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.298 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.557 nvme0n1 00:18:09.557 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:09.557 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:09.816 Running I/O for 2 seconds... 
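Each run_bperf iteration drives I/O from its own bdevperf process that listens on a private RPC socket (/var/tmp/bperf.sock), so the host side can be configured without touching the target. The flow for this first workload, reassembled from the commands in the trace (only the two shell variables are added here for readability):

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle: -z keeps it running, --wait-for-rpc defers framework init.
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the harness polls the socket with waitforlisten before issuing any RPC)

  # Finish init, then attach the NVMe/TCP controller with data digest (--ddgst) enabled.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the timed workload against the freshly created nvme0n1 bdev.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests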
00:18:11.719 00:18:11.719 Latency(us) 00:18:11.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.719 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:11.719 nvme0n1 : 2.01 15194.77 59.35 0.00 0.00 8417.27 2576.76 23116.33 00:18:11.719 =================================================================================================================== 00:18:11.719 Total : 15194.77 59.35 0.00 0.00 8417.27 2576.76 23116.33 00:18:11.719 0 00:18:11.719 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:11.719 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:11.719 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:11.719 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:11.720 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:11.720 | select(.opcode=="crc32c") 00:18:11.720 | "\(.module_name) \(.executed)"' 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80295 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80295 ']' 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80295 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80295 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80295' 00:18:11.978 killing process with pid 80295 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80295 00:18:11.978 Received shutdown signal, test time was about 2.000000 seconds 00:18:11.978 00:18:11.978 Latency(us) 00:18:11.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.978 =================================================================================================================== 00:18:11.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.978 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80295 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80355 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80355 /var/tmp/bperf.sock 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80355 ']' 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.237 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:12.496 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.496 Zero copy mechanism will not be used. 00:18:12.496 [2024-07-15 12:42:44.939700] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
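The pass criterion for each clean-digest run is not the bandwidth figure but the accel statistics collected right after perform_tests: crc32c must actually have been executed, and it must have run on the expected module (software in these runs, since DSA scanning is disabled). The check, reassembled from the accel_get_stats call and jq filter in the trace:

  SPDK=/home/vagrant/spdk_repo/spdk

  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  (( acc_executed > 0 ))           # digests were really computed during the run
  [[ $acc_module == software ]]    # and by the expected accel module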
00:18:12.496 [2024-07-15 12:42:44.939814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80355 ] 00:18:12.496 [2024-07-15 12:42:45.078215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.754 [2024-07-15 12:42:45.193471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.322 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.322 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:13.322 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:13.322 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:13.322 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:13.580 [2024-07-15 12:42:46.173935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.580 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.580 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:14.146 nvme0n1 00:18:14.146 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:14.146 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:14.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:14.146 Zero copy mechanism will not be used. 00:18:14.146 Running I/O for 2 seconds... 
00:18:16.062 00:18:16.062 Latency(us) 00:18:16.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.062 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:16.062 nvme0n1 : 2.00 7850.68 981.33 0.00 0.00 2034.75 1846.92 3053.38 00:18:16.062 =================================================================================================================== 00:18:16.062 Total : 7850.68 981.33 0.00 0.00 2034.75 1846.92 3053.38 00:18:16.062 0 00:18:16.062 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:16.062 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:16.062 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:16.062 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:16.062 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:16.062 | select(.opcode=="crc32c") 00:18:16.062 | "\(.module_name) \(.executed)"' 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80355 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80355 ']' 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80355 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80355 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:16.370 killing process with pid 80355 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80355' 00:18:16.370 Received shutdown signal, test time was about 2.000000 seconds 00:18:16.370 00:18:16.370 Latency(us) 00:18:16.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.370 =================================================================================================================== 00:18:16.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80355 00:18:16.370 12:42:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80355 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:16.629 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80410 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80410 /var/tmp/bperf.sock 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80410 ']' 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.630 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:16.630 [2024-07-15 12:42:49.234967] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
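Between workloads the bdevperf process is torn down with the killprocess helper whose expansion appears repeatedly in the trace: it refuses to kill anything whose command name is sudo and then waits for the PID so the next run starts clean. A minimal re-implementation of the behaviour visible here; the real helper in autotest_common.sh carries extra branches for non-Linux platforms.

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                          # is it still running?
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [[ $process_name != sudo ]] || return 1             # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                 # reap it if it is our child
  }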
00:18:16.630 [2024-07-15 12:42:49.235043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80410 ] 00:18:16.888 [2024-07-15 12:42:49.369964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.888 [2024-07-15 12:42:49.485416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.824 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.824 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:17.824 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:17.824 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:17.824 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:18.083 [2024-07-15 12:42:50.518104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.083 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:18.083 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:18.341 nvme0n1 00:18:18.341 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:18.341 12:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:18.341 Running I/O for 2 seconds... 
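This randwrite pass is the third leg of a four-workload sweep: the clean-digest test repeats the same helper for reads and writes at 4 KiB/queue depth 128 and 128 KiB/queue depth 16. Condensed from the run_bperf calls in the trace (the helper takes rw, block size, queue depth and a scan_dsa flag):

  for spec in "randread 4096 128" "randread 131072 16" \
              "randwrite 4096 128" "randwrite 131072 16"; do
      run_bperf $spec false   # word splitting is intentional; false = no DSA offload, crc32c stays in software
  done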
00:18:20.870 00:18:20.870 Latency(us) 00:18:20.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.870 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.870 nvme0n1 : 2.01 16186.18 63.23 0.00 0.00 7900.84 6523.81 14537.08 00:18:20.870 =================================================================================================================== 00:18:20.870 Total : 16186.18 63.23 0.00 0.00 7900.84 6523.81 14537.08 00:18:20.870 0 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:20.870 | select(.opcode=="crc32c") 00:18:20.870 | "\(.module_name) \(.executed)"' 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80410 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80410 ']' 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80410 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80410 00:18:20.870 killing process with pid 80410 00:18:20.870 Received shutdown signal, test time was about 2.000000 seconds 00:18:20.870 00:18:20.870 Latency(us) 00:18:20.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.870 =================================================================================================================== 00:18:20.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80410' 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80410 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80410 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80476 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80476 /var/tmp/bperf.sock 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80476 ']' 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:20.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.870 12:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:21.127 [2024-07-15 12:42:53.586337] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:21.127 [2024-07-15 12:42:53.586680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80476 ] 00:18:21.127 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:21.127 Zero copy mechanism will not be used. 
00:18:21.127 [2024-07-15 12:42:53.722224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.433 [2024-07-15 12:42:53.838802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.999 12:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.999 12:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:21.999 12:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:21.999 12:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:21.999 12:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:22.257 [2024-07-15 12:42:54.846296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:22.257 12:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.257 12:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.515 nvme0n1 00:18:22.515 12:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:22.515 12:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:22.774 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:22.774 Zero copy mechanism will not be used. 00:18:22.774 Running I/O for 2 seconds... 
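Every "Waiting for process to start up and listen on UNIX domain socket ..." line above comes from waitforlisten, which blocks until the freshly forked bdevperf (or nvmf_tgt) answers on its RPC socket. A simplified version of that loop, assuming rpc_get_methods as the liveness probe; the real helper in autotest_common.sh performs the same PID check on each retry but with extra timing and logging.

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" || return 1                 # the process died before listening
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
              rpc_get_methods &> /dev/null; then
              return 0                               # socket is up and answering RPCs
          fi
          sleep 0.5
      done
      return 1
  }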
00:18:24.676 00:18:24.676 Latency(us) 00:18:24.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.676 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:24.676 nvme0n1 : 2.00 6193.81 774.23 0.00 0.00 2577.47 1534.14 4230.05 00:18:24.676 =================================================================================================================== 00:18:24.676 Total : 6193.81 774.23 0.00 0.00 2577.47 1534.14 4230.05 00:18:24.676 0 00:18:24.676 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:24.676 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:24.676 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:24.676 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:24.676 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:24.676 | select(.opcode=="crc32c") 00:18:24.676 | "\(.module_name) \(.executed)"' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80476 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80476 ']' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80476 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80476 00:18:25.243 killing process with pid 80476 00:18:25.243 Received shutdown signal, test time was about 2.000000 seconds 00:18:25.243 00:18:25.243 Latency(us) 00:18:25.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.243 =================================================================================================================== 00:18:25.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80476' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80476 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80476 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80262 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80262 ']' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80262 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80262 00:18:25.243 killing process with pid 80262 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80262' 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80262 00:18:25.243 12:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80262 00:18:25.502 00:18:25.502 real 0m18.935s 00:18:25.502 user 0m36.572s 00:18:25.502 sys 0m4.888s 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:25.502 ************************************ 00:18:25.502 END TEST nvmf_digest_clean 00:18:25.502 ************************************ 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:25.502 ************************************ 00:18:25.502 START TEST nvmf_digest_error 00:18:25.502 ************************************ 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.502 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
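Both halves of the digest suite are launched through run_test, which is what prints the START/END banners and the real/user/sys timing block seen above for nvmf_digest_clean. A rough stand-in for that wrapper is sketched here; the real implementation also propagates the exit status into the suite summary and hooks into the xtrace bookkeeping.

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  # as invoked by host/digest.sh in the trace:
  run_test nvmf_digest_clean run_digest
  run_test nvmf_digest_error run_digest_error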
00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80559 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80559 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80559 ']' 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.762 12:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.762 [2024-07-15 12:42:58.245488] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:25.762 [2024-07-15 12:42:58.245618] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.762 [2024-07-15 12:42:58.386616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.022 [2024-07-15 12:42:58.505049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.022 [2024-07-15 12:42:58.505104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.022 [2024-07-15 12:42:58.505132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.022 [2024-07-15 12:42:58.505157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.022 [2024-07-15 12:42:58.505165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:26.022 [2024-07-15 12:42:58.505191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.589 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.589 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:26.589 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.589 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:26.589 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.590 [2024-07-15 12:42:59.225976] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.590 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.848 [2024-07-15 12:42:59.291664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:26.848 null0 00:18:26.848 [2024-07-15 12:42:59.343809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.848 [2024-07-15 12:42:59.367950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.848 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.848 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:26.848 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:26.848 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:26.848 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80591 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80591 /var/tmp/bperf.sock 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80591 ']' 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
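For reference, the target-side setup captured above condenses to the shell sketch below. It is reconstructed only from commands shown in this run (the nvmf_tgt launch line and the accel_assign_opc RPC) and is not itself part of the recorded output; rpc.py here talks to the target's default socket, /var/tmp/spdk.sock.

# Launch the target inside the test netns, paused by --wait-for-rpc.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

# While the app is still paused, route the crc32c opcode to the error-injection
# accel module so data digests can later be corrupted on demand.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

# The remaining target configuration (framework start, the TCP transport and the
# 10.0.0.2:4420 listener) is applied by common_target_config via a JSON config
# piped to rpc.py, which is what produces the "TCP Transport Init" and
# "Target Listening" notices above.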
00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:26.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.849 12:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.849 [2024-07-15 12:42:59.422737] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:26.849 [2024-07-15 12:42:59.423066] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80591 ] 00:18:27.107 [2024-07-15 12:42:59.558988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.107 [2024-07-15 12:42:59.676649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.107 [2024-07-15 12:42:59.730311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:28.041 12:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:28.607 nvme0n1 00:18:28.608 12:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:28.608 12:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.608 12:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:28.608 12:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.608 12:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
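The bdevperf (bperf) side follows the same pattern; the sketch below rewrites the bperf_rpc/rpc_cmd wrappers traced above as explicit rpc.py calls, with every flag copied from the logged commands. The only liberty taken is spelling out which Unix socket each call targets: bdev_nvme RPCs go to bdevperf's /var/tmp/bperf.sock, while the error-injection RPCs go to the target's default socket.

# Start bdevperf idle (-z) with its RPC server on /var/tmp/bperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

# Allow unlimited bdev-level retries and keep per-error statistics on the host.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c error injection disabled on the target while connecting ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# ... attach the controller with data digest (--ddgst) enabled ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ... then switch the target's crc32c injection to corrupt (flags as logged)
# and kick off the bdevperf run.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests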
00:18:28.608 12:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:28.608 Running I/O for 2 seconds... 00:18:28.608 [2024-07-15 12:43:01.189834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.608 [2024-07-15 12:43:01.189895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.608 [2024-07-15 12:43:01.189913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.608 [2024-07-15 12:43:01.206985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.608 [2024-07-15 12:43:01.207030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.608 [2024-07-15 12:43:01.207046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.608 [2024-07-15 12:43:01.224049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.608 [2024-07-15 12:43:01.224091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.608 [2024-07-15 12:43:01.224107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.608 [2024-07-15 12:43:01.241034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.608 [2024-07-15 12:43:01.241088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.608 [2024-07-15 12:43:01.241103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.608 [2024-07-15 12:43:01.257944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.608 [2024-07-15 12:43:01.257983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.608 [2024-07-15 12:43:01.258014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.608 [2024-07-15 12:43:01.275225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.608 [2024-07-15 12:43:01.275271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.608 [2024-07-15 12:43:01.275287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.292318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.292365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.292382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.309301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.309346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.309361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.326915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.326973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.326989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.344264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.344344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.344361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.361479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.361518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.361549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.378907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.378967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.378982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.396152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.396193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.396208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.413336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.413374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.413405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.430339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.430377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.430408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.447616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.447660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.447694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.464286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.464326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.464358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.481166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.481208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.481239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.497988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.498026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.498072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.514620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.514658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.514688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.531192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 
[2024-07-15 12:43:01.531230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.531261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.867 [2024-07-15 12:43:01.548261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:28.867 [2024-07-15 12:43:01.548333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.867 [2024-07-15 12:43:01.548350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.565188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.565231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.565262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.581968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.582007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.582038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.598652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.598692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.598707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.615213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.615251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.615281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.631968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.632012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.632027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.648685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.648742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.648759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.665547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.665586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.665600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.682590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.682628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.682659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.699915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.699960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.699977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.716859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.716899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.716914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.733473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.733511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.733542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.750000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.750037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.750066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.766900] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.766938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.766952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.783943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.783985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.784000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.126 [2024-07-15 12:43:01.800671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.126 [2024-07-15 12:43:01.800713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.126 [2024-07-15 12:43:01.800747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.385 [2024-07-15 12:43:01.817857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.385 [2024-07-15 12:43:01.817902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.385 [2024-07-15 12:43:01.817917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.385 [2024-07-15 12:43:01.834858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.385 [2024-07-15 12:43:01.834899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.385 [2024-07-15 12:43:01.834914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.385 [2024-07-15 12:43:01.851673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.385 [2024-07-15 12:43:01.851714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.385 [2024-07-15 12:43:01.851745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.385 [2024-07-15 12:43:01.868653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.385 [2024-07-15 12:43:01.868693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.868708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:29.386 [2024-07-15 12:43:01.885548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:01.885587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.885618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:01.902336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:01.902391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.902406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:01.919284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:01.919323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.919338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:01.936185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:01.936222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.936237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:01.953136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:01.953179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.953195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:01.970115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:01.970157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.970189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:01.987078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:01.987131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:01.987146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:02.004116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:02.004152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:02.004166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:02.021133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:02.021169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:02.021182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:02.037912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:02.037962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:02.037976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.386 [2024-07-15 12:43:02.054945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.386 [2024-07-15 12:43:02.054980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.386 [2024-07-15 12:43:02.054994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.072208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.072264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.072289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.089497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.089551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.089566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.106665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.106717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.106732] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.123530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.123581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.123595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.140111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.140162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.140176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.156812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.156847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.156860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.173612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.173646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.173660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.190415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.190453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.190466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.207368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.207402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.207416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.224459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.224511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 
12:43:02.224527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.241324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.241362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.241376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.265670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.265730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.265757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.282480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.282534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.282548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.299410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.299466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.299480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.645 [2024-07-15 12:43:02.316149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.645 [2024-07-15 12:43:02.316202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.645 [2024-07-15 12:43:02.316216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.905 [2024-07-15 12:43:02.333288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.905 [2024-07-15 12:43:02.333336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.905 [2024-07-15 12:43:02.333352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.905 [2024-07-15 12:43:02.350257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.905 [2024-07-15 12:43:02.350297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11295 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.905 [2024-07-15 12:43:02.350312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.905 [2024-07-15 12:43:02.367136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.905 [2024-07-15 12:43:02.367183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.905 [2024-07-15 12:43:02.367198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.905 [2024-07-15 12:43:02.383993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.905 [2024-07-15 12:43:02.384029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.905 [2024-07-15 12:43:02.384043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.905 [2024-07-15 12:43:02.400858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.905 [2024-07-15 12:43:02.400893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.400907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.417748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.417812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.417827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.434851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.434886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.434900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.451811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.451847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.451861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.469006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.469047] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.469062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.485981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.486025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.486040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.502891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.502935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.502950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.519981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.520034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.520049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.537130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.537176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.537192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.554118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.554163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.554178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.571069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.571118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.571132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.906 [2024-07-15 12:43:02.588152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:29.906 [2024-07-15 12:43:02.588207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.906 [2024-07-15 12:43:02.588223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.605204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.605274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.605290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.622318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.622371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.622387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.639660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.639722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.639751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.656850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.656909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.656925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.673854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.673908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.673923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.690806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.690847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.690862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.707853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 
00:18:30.165 [2024-07-15 12:43:02.707904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.707919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.725105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.725160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.725174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.742485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.742557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.742574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.759543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.759604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.759619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.776347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.776406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.776420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.793114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.793178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.793193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.809999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.810065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.810080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.827035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.827084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.827099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.165 [2024-07-15 12:43:02.844053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.165 [2024-07-15 12:43:02.844102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.165 [2024-07-15 12:43:02.844116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.861268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.424 [2024-07-15 12:43:02.861330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.424 [2024-07-15 12:43:02.861345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.878450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.424 [2024-07-15 12:43:02.878524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.424 [2024-07-15 12:43:02.878540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.895642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.424 [2024-07-15 12:43:02.895728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.424 [2024-07-15 12:43:02.895753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.912972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.424 [2024-07-15 12:43:02.913021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.424 [2024-07-15 12:43:02.913035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.929979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.424 [2024-07-15 12:43:02.930041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.424 [2024-07-15 12:43:02.930055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.947178] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.424 [2024-07-15 12:43:02.947222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.424 [2024-07-15 12:43:02.947236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.964072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.424 [2024-07-15 12:43:02.964128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.424 [2024-07-15 12:43:02.964143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.424 [2024-07-15 12:43:02.981278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:02.981323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:02.981339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.425 [2024-07-15 12:43:02.998377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:02.998435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:02.998451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.425 [2024-07-15 12:43:03.015472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:03.015521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:03.015538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.425 [2024-07-15 12:43:03.032386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:03.032432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:03.032454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.425 [2024-07-15 12:43:03.049393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:03.049434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:03.049450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:30.425 [2024-07-15 12:43:03.066040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:03.066096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:03.066110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.425 [2024-07-15 12:43:03.082711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:03.082801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:03.082816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.425 [2024-07-15 12:43:03.099286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.425 [2024-07-15 12:43:03.099358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.425 [2024-07-15 12:43:03.099372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.683 [2024-07-15 12:43:03.116968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.683 [2024-07-15 12:43:03.117026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.683 [2024-07-15 12:43:03.117042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.683 [2024-07-15 12:43:03.133937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.683 [2024-07-15 12:43:03.133995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.683 [2024-07-15 12:43:03.134011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.683 [2024-07-15 12:43:03.151113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.683 [2024-07-15 12:43:03.151179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.683 [2024-07-15 12:43:03.151194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.683 [2024-07-15 12:43:03.167443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde020) 00:18:30.683 [2024-07-15 12:43:03.167507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.683 [2024-07-15 12:43:03.167521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.683 00:18:30.683 Latency(us) 00:18:30.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.683 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:30.683 nvme0n1 : 2.01 14853.36 58.02 0.00 0.00 8610.54 7923.90 32887.16 00:18:30.683 =================================================================================================================== 00:18:30.683 Total : 14853.36 58.02 0.00 0.00 8610.54 7923.90 32887.16 00:18:30.683 0 00:18:30.683 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:30.683 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:30.683 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:30.683 | .driver_specific 00:18:30.683 | .nvme_error 00:18:30.683 | .status_code 00:18:30.683 | .command_transient_transport_error' 00:18:30.683 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 )) 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80591 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80591 ']' 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80591 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80591 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:30.942 killing process with pid 80591 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80591' 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80591 00:18:30.942 Received shutdown signal, test time was about 2.000000 seconds 00:18:30.942 00:18:30.942 Latency(us) 00:18:30.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.942 =================================================================================================================== 00:18:30.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.942 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80591 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 
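[editor's note] The check traced above is the pass/fail criterion for this digest-error case: bdevperf is queried over its RPC socket for the per-bdev NVMe error counters, and the case passes if at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (117 here, one per corrupted data digest). A minimal stand-alone sketch of that step follows; it is not the test script itself, but reuses the rpc.py path, bperf socket, and jq filter shown verbatim in the trace, and the helper name mirrors get_transient_errcount from host/digest.sh.

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (assumptions: the bdevperf
# instance is already running and was attached with --nvme-error-stat enabled).

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
	# bdev_get_iostat reports per-bdev NVMe error statistics; pull out the
	# "command transient transport error" counter for the first bdev.
	"$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
		| jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# In the run above this evaluated to 117: every corrupted data digest surfaced
# as a retryable transient transport error rather than a hard I/O failure.
(( errcount > 0 ))

The killprocess/wait pair that follows tears down this bdevperf instance before the next combination (randread, 131072-byte I/O, queue depth 16) is started in the trace below.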
00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80650 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80650 /var/tmp/bperf.sock 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80650 ']' 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.200 12:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.200 [2024-07-15 12:43:03.755660] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:31.200 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:31.200 Zero copy mechanism will not be used. 00:18:31.201 [2024-07-15 12:43:03.756457] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80650 ] 00:18:31.458 [2024-07-15 12:43:03.897390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.458 [2024-07-15 12:43:04.013125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.458 [2024-07-15 12:43:04.067157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:32.028 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.028 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:32.028 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:32.028 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:32.296 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:32.296 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.296 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.296 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.296 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:32.296 12:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:32.864 nvme0n1 00:18:32.864 12:43:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:32.864 12:43:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.864 12:43:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.864 12:43:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.864 12:43:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:32.864 12:43:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:32.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:32.864 Zero copy mechanism will not be used. 00:18:32.864 Running I/O for 2 seconds... 00:18:32.864 [2024-07-15 12:43:05.410642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.410699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.410715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.415592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.415629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.415644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.419820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.419856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.419870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.423975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.424011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.424025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.428206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.428242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.428256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.432499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.432534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.432548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.436703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.436749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.436763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.440788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.440819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.440833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.864 [2024-07-15 12:43:05.444956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.864 [2024-07-15 12:43:05.444992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.864 [2024-07-15 12:43:05.445005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.449102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.449136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.449149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.453257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.453293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.453306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.457473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.457508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.457522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.461663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.461699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.461712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.465850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.465899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.465912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.470050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.470100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.470113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.474313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.474347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.474361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.478491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.478542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.478555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.482620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.482669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.482682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.486900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.486933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.486947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.491157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.491192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.491205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.495377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.495411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.495425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.499596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.499630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.499644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.503703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.503749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.503764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.507860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.507894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.507908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.512261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.512303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.512318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.516643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 
00:18:32.865 [2024-07-15 12:43:05.516683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.516698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.520900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.520939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.520953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.525249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.525302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.525316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.529431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.529482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.529495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.533615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.533667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.533681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.537915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.537965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.537979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.865 [2024-07-15 12:43:05.542019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:32.865 [2024-07-15 12:43:05.542070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.865 [2024-07-15 12:43:05.542083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.125 [2024-07-15 12:43:05.546604] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.125 [2024-07-15 12:43:05.546644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.125 [2024-07-15 12:43:05.546658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.125 [2024-07-15 12:43:05.551202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.125 [2024-07-15 12:43:05.551241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.125 [2024-07-15 12:43:05.551256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.125 [2024-07-15 12:43:05.555552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.125 [2024-07-15 12:43:05.555606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.125 [2024-07-15 12:43:05.555621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.125 [2024-07-15 12:43:05.559861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.125 [2024-07-15 12:43:05.559912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.125 [2024-07-15 12:43:05.559927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.125 [2024-07-15 12:43:05.564020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.125 [2024-07-15 12:43:05.564055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.125 [2024-07-15 12:43:05.564069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.125 [2024-07-15 12:43:05.568270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.568321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.568335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.572569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.572605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.572618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.576819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.576854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.576868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.580955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.580990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.581003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.585154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.585188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.585202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.589649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.589690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.589705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.593975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.594029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.594043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.598174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.598226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.598240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.602469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.602522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.602536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.606763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.606825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.606840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.610904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.610939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.610952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.615040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.615075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.615089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.619170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.619205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.619218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.623405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.623441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.623455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.627609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.627644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.627657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.631771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.631806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.631819] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.635956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.635991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.636004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.640141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.640175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.640189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.644367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.644402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.644416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.648601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.648636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.648650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.652853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.652901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.652915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.657013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.657061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.657075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.661174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.661224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.661237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.665422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.665472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.665485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.669727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.669770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.669784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.673962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.673996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.674010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.678171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.678205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.678218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.682439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.682474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.682488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.686647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.126 [2024-07-15 12:43:05.686697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.126 [2024-07-15 12:43:05.686710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.126 [2024-07-15 12:43:05.690891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.690940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.690954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.695178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.695212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.695225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.699442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.699492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.699505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.703624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.703659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.703673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.707828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.707862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.707875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.711947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.711982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.711995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.716103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.716137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.716151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.720358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.720393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.724567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.724602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.724615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.728801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.728839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.728852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.733223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.733307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.733321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.737719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.737781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.737795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.742076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.742112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.742126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.746272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.746306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.746320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.750502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 
00:18:33.127 [2024-07-15 12:43:05.750537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.750550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.754570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.754605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.754618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.758817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.758849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.758863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.763154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.763218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.763231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.767434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.767468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.767482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.771840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.771874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.771888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.776040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.776073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.776087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.780241] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.780291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.780303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.784492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.784525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.784539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.788724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.788768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.788781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.793046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.793079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.793093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.797335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.797384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.797397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.801716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.801775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.801789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.127 [2024-07-15 12:43:05.806344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.127 [2024-07-15 12:43:05.806385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-07-15 12:43:05.806400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.810782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.810834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.810848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.815181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.815236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.815250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.819430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.819481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.819495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.823614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.823665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.823678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.827763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.827813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.827826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.831931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.831982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.832011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.836238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.836288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.836302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.840613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.840647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.840661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.845084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.845123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.845137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.849405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.849445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.849460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.853872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.853914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.853928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.858239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.858305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.858319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.862570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.862621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.862635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.866782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.866816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.866829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.870950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.870984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.870997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.875167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.875201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.875214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.879451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.879501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.879531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.883699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.883760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.883774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.888027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.888077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.888091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.892326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.892376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.892389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.896669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.896704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:33.388 [2024-07-15 12:43:05.896718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.901090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.901123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-07-15 12:43:05.901136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.388 [2024-07-15 12:43:05.905379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.388 [2024-07-15 12:43:05.905440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.905454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.909657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.909705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.909718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.913870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.913917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.913930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.918226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.918261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.918275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.922359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.922394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.922407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.926635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.926670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.926683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.930938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.930974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.930987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.935094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.935128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.935142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.939197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.939231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.939244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.943551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.943603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.943617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.947822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.947857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.947870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.952292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.952334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.952349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.956682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.956724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.956753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.961051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.961088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.961103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.965286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.965337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.965351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.969497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.969532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.969545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.973706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.973755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.973769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.977928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.977977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.977990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.982121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.982172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.982186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.986471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 
00:18:33.389 [2024-07-15 12:43:05.986506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.986520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.990658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.990709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.990723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.994963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.995000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.995013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:05.999243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:05.999308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:05.999322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:06.003623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:06.003674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:06.003688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:06.007841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:06.007875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:06.007888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:06.012010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:06.012058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:06.012072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:06.016185] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:06.016235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:06.016249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:06.020432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:06.020493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:06.020507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:06.024643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:06.024678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-07-15 12:43:06.024695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.389 [2024-07-15 12:43:06.028859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.389 [2024-07-15 12:43:06.028893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.028906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.033028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.033063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.033076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.037206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.037241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.037255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.041384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.041418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.041431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.045493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.045527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.045541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.049642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.049676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.049690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.053824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.053857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.053870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.057990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.058024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.058037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.062268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.062302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.062315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.390 [2024-07-15 12:43:06.066675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.390 [2024-07-15 12:43:06.066717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.390 [2024-07-15 12:43:06.066749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.071164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.071204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.071218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.075468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.075504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.075519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.079764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.079802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.079816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.084043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.084079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.084093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.088110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.088145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.088159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.092220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.092255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.092268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.096407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.096442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.096470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.100598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.100633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.100646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.105017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.105056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.105071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.109496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.109536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.109551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.113812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.113843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.113858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.118071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.118108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.122220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.122255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.122268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.126460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.126495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.126509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.130722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.130785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:33.650 [2024-07-15 12:43:06.130799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.135055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.135090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.135105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.139255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.139290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.139303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.143472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.143538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.143552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.147690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.147725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.147753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.650 [2024-07-15 12:43:06.151826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.650 [2024-07-15 12:43:06.151860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.650 [2024-07-15 12:43:06.151873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.155922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.155955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.155969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.160121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.160156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.160169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.164329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.164365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.164378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.168463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.168497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.168510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.172670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.172705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.172718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.176890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.176927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.176940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.181179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.181214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.181227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.185439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.185474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.185488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.189693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.189739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.189754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.194043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.194078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.194092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.198242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.198276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.198289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.202430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.202465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.202478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.206642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.206694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.206708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.211010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.211047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.211061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.215162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.215196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.215210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.219374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 
00:18:33.651 [2024-07-15 12:43:06.219408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.219421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.223663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.223711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.223724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.227936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.227984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.227997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.232219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.232253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.232266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.236321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.236355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.236368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.240484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.240517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.240530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.244805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.244838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.244851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.249006] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.249040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.249052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.253263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.253302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.253316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.257353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.257388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.257401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.261688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.261723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.261749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.265875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.265910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.265923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.270009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.651 [2024-07-15 12:43:06.270043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.651 [2024-07-15 12:43:06.270056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.651 [2024-07-15 12:43:06.274167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.274201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.274214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.278331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.278365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.278378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.282445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.282480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.282493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.286633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.286668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.286682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.290873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.290906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.290919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.295120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.295154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.295168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.299331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.299366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.299380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.303603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.303637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.303650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.307871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.307905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.307918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.312183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.312218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.312231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.316386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.316421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.316434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.320586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.320620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.320634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.324780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.324812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.324825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.652 [2024-07-15 12:43:06.329064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.652 [2024-07-15 12:43:06.329103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.652 [2024-07-15 12:43:06.329117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.333622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.333664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.333679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.337851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.337890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.337904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.342223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.342262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.342277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.346462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.346499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.346513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.350557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.350592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.350605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.354804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.354853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.354869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.359122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.359158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.359171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.363429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.363469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:33.912 [2024-07-15 12:43:06.363484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.367906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.367945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.367960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.372140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.372177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.912 [2024-07-15 12:43:06.372191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.912 [2024-07-15 12:43:06.376417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.912 [2024-07-15 12:43:06.376463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.376478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.380667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.380703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.380716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.384918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.384953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.384968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.389015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.389050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.389064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.393225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.393260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.393274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.397503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.397538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.397552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.401852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.401887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.401901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.406129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.406164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.406177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.410405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.410440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.410454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.414709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.414769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.414783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.419044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.419079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.419092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.423277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.423312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.423326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.427614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.427665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.427678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.431794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.431828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.431842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.436058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.436092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.436105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.440261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.440311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.440326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.444804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.444838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.444851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.449065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.449099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.449113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.453278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 
00:18:33.913 [2024-07-15 12:43:06.453328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.453341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.457495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.457544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.457558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.461815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.461851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.461865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.465909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.465960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.465974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.470100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.470134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.470147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.474316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.474351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.474365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.478595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.478636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.478650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.482995] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.483038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.483052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.487461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.487515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.487530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.491706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.491770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.491785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.496074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.496126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.913 [2024-07-15 12:43:06.496140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.913 [2024-07-15 12:43:06.500387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.913 [2024-07-15 12:43:06.500438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.500461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.504750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.504784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.504798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.509291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.509327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.509340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.513632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.513669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.513682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.517901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.517952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.517965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.522157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.522207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.522221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.526533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.526583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.526597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.530889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.530938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.530969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.535135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.535170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.535184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.539404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.539455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.539469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.543585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.543635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.543648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.547839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.547889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.547903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.552194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.552230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.552243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.556519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.556553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.556566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.560729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.560770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.560784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.565155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.565209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.565224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.569588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.569641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.569655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.573917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.573989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.574003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.578297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.578349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.578363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.582555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.582606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.582621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.586867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.586901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.586915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.914 [2024-07-15 12:43:06.591523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:33.914 [2024-07-15 12:43:06.591580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.914 [2024-07-15 12:43:06.591595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.174 [2024-07-15 12:43:06.596157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.174 [2024-07-15 12:43:06.596197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.174 [2024-07-15 12:43:06.596213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.174 [2024-07-15 12:43:06.600611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.174 [2024-07-15 12:43:06.600661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:34.174 [2024-07-15 12:43:06.600683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.174 [2024-07-15 12:43:06.604927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.174 [2024-07-15 12:43:06.604965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.604988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.609310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.609360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.609374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.613622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.613656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.613670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.617883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.617932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.617946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.622357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.622398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.622412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.626841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.626879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.626895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.631181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.631219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.631232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.635543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.635579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.635593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.639829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.639879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.639893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.644069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.644120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.644134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.648310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.648348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.648362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.652574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.652608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.652621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.656995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.657031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.657044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.661265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.661314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.661328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.665553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.665603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.665616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.669853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.669887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.669900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.674084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.674119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.674132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.678360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.678396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.678410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.682597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.682647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.682660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.686833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.686883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.686897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.690984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 
00:18:34.175 [2024-07-15 12:43:06.691032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.691046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.695199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.695250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.695264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.699314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.699362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.699375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.703402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.703451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.703480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.707538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.707586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.707600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.711674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.711723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.711737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.715885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.175 [2024-07-15 12:43:06.715934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.175 [2024-07-15 12:43:06.715948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.175 [2024-07-15 12:43:06.719972] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0)
00:18:34.175 [2024-07-15 12:43:06.720019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.175 [2024-07-15 12:43:06.720032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:34.175 [2024-07-15 12:43:06.724018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0)
00:18:34.175 [2024-07-15 12:43:06.724065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.175 [2024-07-15 12:43:06.724078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x1b9eac0) from nvme_tcp_accel_seq_recv_compute_crc32_done, the offending READ sqid:1 cid:15 nsid:1 len:32 with varying lba, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats at roughly 4 ms intervals from 12:43:06.728 to 12:43:07.322 ...]
00:18:34.700 [2024-07-15 12:43:07.327279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0)
00:18:34.700 [2024-07-15 12:43:07.327320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.700 [2024-07-15 12:43:07.327335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:34.700 [2024-07-15 12:43:07.331682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0)
00:18:34.700 [2024-07-15 12:43:07.331737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.331764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.336018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.336069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.336083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.340329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.340380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.340394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.344581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.344615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.344628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.348681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.348715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.348741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.352913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.352947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.352960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.357010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.357044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.357058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.361219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.361269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.700 [2024-07-15 12:43:07.361283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.700 [2024-07-15 12:43:07.365373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.700 [2024-07-15 12:43:07.365423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.701 [2024-07-15 12:43:07.365437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.701 [2024-07-15 12:43:07.369639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.701 [2024-07-15 12:43:07.369688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.701 [2024-07-15 12:43:07.369701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.701 [2024-07-15 12:43:07.373787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.701 [2024-07-15 12:43:07.373834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.701 [2024-07-15 12:43:07.373847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.701 [2024-07-15 12:43:07.378049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.701 [2024-07-15 12:43:07.378103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.701 [2024-07-15 12:43:07.378118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.960 [2024-07-15 12:43:07.382816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.960 [2024-07-15 12:43:07.382869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.960 [2024-07-15 12:43:07.382883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.960 [2024-07-15 12:43:07.387228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 00:18:34.960 [2024-07-15 12:43:07.387268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.960 [2024-07-15 12:43:07.387284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.960 [2024-07-15 12:43:07.391466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0) 
00:18:34.960 [2024-07-15 12:43:07.391518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.960 [2024-07-15 12:43:07.391532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:18:34.960 [2024-07-15 12:43:07.395812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0)
00:18:34.960 [2024-07-15 12:43:07.395865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.960 [2024-07-15 12:43:07.395879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:34.960 [2024-07-15 12:43:07.400274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b9eac0)
00:18:34.960 [2024-07-15 12:43:07.400315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.960 [2024-07-15 12:43:07.400330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:34.960
00:18:34.960 Latency(us)
00:18:34.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.960 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:34.960 nvme0n1 : 2.00 7248.42 906.05 0.00 0.00 2203.50 1861.82 9651.67
00:18:34.960 ===================================================================================================================
00:18:34.960 Total : 7248.42 906.05 0.00 0.00 2203.50 1861.82 9651.67
00:18:34.960 0
00:18:34.960 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:34.960 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:34.960 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:34.960 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:34.960 | .driver_specific
00:18:34.960 | .nvme_error
00:18:34.960 | .status_code
00:18:34.960 | .command_transient_transport_error'
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 468 > 0 ))
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80650
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80650 ']'
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80650
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80650
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:35.218 killing process with pid 80650 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80650'
00:18:35.218 Received shutdown signal, test time was about 2.000000 seconds
00:18:35.218
00:18:35.218 Latency(us)
00:18:35.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.218 ===================================================================================================================
00:18:35.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80650
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80650
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:35.218 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80712
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80712 /var/tmp/bperf.sock
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80712 ']'
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:35.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:35.477 12:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:35.477 [2024-07-15 12:43:07.943050] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:18:35.477 [2024-07-15 12:43:07.943132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80712 ]
00:18:35.477 [2024-07-15 12:43:08.075277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.735 [2024-07-15 12:43:08.190978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:35.735 [2024-07-15 12:43:08.244601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:18:36.302 12:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:36.302 12:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:18:36.302 12:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:36.302 12:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:36.561 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:36.561 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.561 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:36.561 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.561 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:36.561 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:37.126 nvme0n1
00:18:37.126 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:18:37.126 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:37.126 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:37.126 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:37.126 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:37.126 12:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:37.126 Running I/O for 2 seconds...
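Note on the setup traced above: it is the complete recipe for the randwrite error case. bdevperf is started idle (-z) on /var/tmp/bperf.sock, per-status NVMe error counters and unlimited bdev retries are enabled, crc32c error injection is switched off while the controller is attached with data digest (--ddgst) enabled, injection is then set to corrupt mode with interval 256, and the 2-second workload is kicked off through perform_tests. A condensed, stand-alone sketch of the same sequence follows; paths, sockets and RPC parameters are copied from the trace, while the bare rpc.py call standing in for the harness's rpc_cmd helper (and therefore the target-side default RPC socket) and the socket-wait loop replacing waitforlisten are assumptions, not the literal digest.sh source.

    #!/usr/bin/env bash
    # Sketch only: mirrors the digest.sh trace above, not the script itself.
    set -e
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle (-z); the workload only runs once perform_tests is issued.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    until [ -S "$BPERF_SOCK" ]; do sleep 0.1; done    # stand-in for waitforlisten

    # Count NVMe error statuses per status code and retry failed I/O indefinitely.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection off while connecting, so the attach itself is clean
    # (rpc_cmd in the trace; the default target-side socket is assumed here).
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Attach the TCP target with data digest enabled.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm crc32c corruption at interval 256, then run the 2-second workload.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The digest errors and TRANSIENT TRANSPORT ERROR completions that follow are the intended effect of the injected corruption; the test later asserts that their count is non-zero.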
00:18:37.126 [2024-07-15 12:43:09.672383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fef90 00:18:37.126 [2024-07-15 12:43:09.674939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.674992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.126 [2024-07-15 12:43:09.688323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190feb58 00:18:37.126 [2024-07-15 12:43:09.690841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.690876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:37.126 [2024-07-15 12:43:09.704161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fe2e8 00:18:37.126 [2024-07-15 12:43:09.706638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.706670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:37.126 [2024-07-15 12:43:09.719967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fda78 00:18:37.126 [2024-07-15 12:43:09.722429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.722465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:37.126 [2024-07-15 12:43:09.735939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fd208 00:18:37.126 [2024-07-15 12:43:09.738371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.738408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:37.126 [2024-07-15 12:43:09.751774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fc998 00:18:37.126 [2024-07-15 12:43:09.754226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.754263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:37.126 [2024-07-15 12:43:09.767600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fc128 00:18:37.126 [2024-07-15 12:43:09.769993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.770045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
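Each injected corruption in this randwrite output shows up as the same three-line pattern: a tcp.c data digest error, the WRITE command print, and a completion carrying TRANSIENT TRANSPORT ERROR (00/22). When the 2-second run ends, the counter those completions feed is read back exactly as in the randread check traced earlier in this log; a minimal stand-alone version of that check is sketched below (socket, RPC and jq filter copied from that trace; packaging it as a shell function is illustrative, not the literal digest.sh source).

    # Sketch of the get_transient_errcount check seen earlier in the trace.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The randread pass above reported 468; the test only requires a non-zero count.
    (( $(get_transient_errcount nvme0n1) > 0 ))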
00:18:37.126 [2024-07-15 12:43:09.783399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fb8b8 00:18:37.126 [2024-07-15 12:43:09.785807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.785855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:37.126 [2024-07-15 12:43:09.799327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fb048 00:18:37.126 [2024-07-15 12:43:09.801703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.126 [2024-07-15 12:43:09.801759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.815327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fa7d8 00:18:37.384 [2024-07-15 12:43:09.817694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.817738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.831251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f9f68 00:18:37.384 [2024-07-15 12:43:09.833610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.833644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.847113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f96f8 00:18:37.384 [2024-07-15 12:43:09.849416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.849449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.863115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f8e88 00:18:37.384 [2024-07-15 12:43:09.865427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.865458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.878971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f8618 00:18:37.384 [2024-07-15 12:43:09.881239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.881272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.894801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f7da8 00:18:37.384 [2024-07-15 12:43:09.897046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.897081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.910579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f7538 00:18:37.384 [2024-07-15 12:43:09.912814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.912846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.926348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f6cc8 00:18:37.384 [2024-07-15 12:43:09.928549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.928581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.942137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f6458 00:18:37.384 [2024-07-15 12:43:09.944306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.944338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.957950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f5be8 00:18:37.384 [2024-07-15 12:43:09.960106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.960139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.973712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f5378 00:18:37.384 [2024-07-15 12:43:09.975852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.975883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:09.989581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f4b08 00:18:37.384 [2024-07-15 12:43:09.991697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:09.991750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:10.005488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f4298 00:18:37.384 [2024-07-15 12:43:10.007584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:10.007623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:10.021302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f3a28 00:18:37.384 [2024-07-15 12:43:10.023383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:10.023415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:10.037105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f31b8 00:18:37.384 [2024-07-15 12:43:10.039167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:10.039199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:37.384 [2024-07-15 12:43:10.052873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f2948 00:18:37.384 [2024-07-15 12:43:10.054911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.384 [2024-07-15 12:43:10.054941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:37.642 [2024-07-15 12:43:10.068741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f20d8 00:18:37.642 [2024-07-15 12:43:10.070769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.642 [2024-07-15 12:43:10.070808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:37.642 [2024-07-15 12:43:10.084687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f1868 00:18:37.642 [2024-07-15 12:43:10.086692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.642 [2024-07-15 12:43:10.086738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:37.642 [2024-07-15 12:43:10.100528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f0ff8 00:18:37.642 [2024-07-15 12:43:10.102518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.642 [2024-07-15 12:43:10.102551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:37.642 [2024-07-15 12:43:10.116428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f0788 00:18:37.643 [2024-07-15 12:43:10.118426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.118458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.132311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eff18 00:18:37.643 [2024-07-15 12:43:10.134268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.134301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.148123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ef6a8 00:18:37.643 [2024-07-15 12:43:10.150066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.150101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.164005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eee38 00:18:37.643 [2024-07-15 12:43:10.165925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.165959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.179888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ee5c8 00:18:37.643 [2024-07-15 12:43:10.181787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.181821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.195660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190edd58 00:18:37.643 [2024-07-15 12:43:10.197540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.197572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.211555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ed4e8 00:18:37.643 [2024-07-15 12:43:10.213450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.213499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.227769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ecc78 00:18:37.643 [2024-07-15 12:43:10.229610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.229643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.243814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ec408 00:18:37.643 [2024-07-15 12:43:10.245669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.245722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.259919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ebb98 00:18:37.643 [2024-07-15 12:43:10.261713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.261776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.275785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eb328 00:18:37.643 [2024-07-15 12:43:10.277577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.277625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.291756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eaab8 00:18:37.643 [2024-07-15 12:43:10.293501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.293539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.307522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ea248 00:18:37.643 [2024-07-15 12:43:10.309281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.309314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:37.643 [2024-07-15 12:43:10.323366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e99d8 00:18:37.643 [2024-07-15 12:43:10.325125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.643 [2024-07-15 12:43:10.325160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.339272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e9168 00:18:37.901 [2024-07-15 12:43:10.341021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.341061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.354960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e88f8 00:18:37.901 [2024-07-15 12:43:10.356642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.356676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.370704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e8088 00:18:37.901 [2024-07-15 12:43:10.372353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.372402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.386638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e7818 00:18:37.901 [2024-07-15 12:43:10.388275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.388309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.402524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e6fa8 00:18:37.901 [2024-07-15 12:43:10.404148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.404181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.418343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e6738 00:18:37.901 [2024-07-15 12:43:10.419938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.419979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.434089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e5ec8 00:18:37.901 [2024-07-15 12:43:10.435656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.435703] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.449757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e5658 00:18:37.901 [2024-07-15 12:43:10.451304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.451352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.465486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e4de8 00:18:37.901 [2024-07-15 12:43:10.467073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.467105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.481240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e4578 00:18:37.901 [2024-07-15 12:43:10.482758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.482805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.497154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e3d08 00:18:37.901 [2024-07-15 12:43:10.498634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.498687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.513091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e3498 00:18:37.901 [2024-07-15 12:43:10.514617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.514652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.529033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e2c28 00:18:37.901 [2024-07-15 12:43:10.530484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.530556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.544850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e23b8 00:18:37.901 [2024-07-15 12:43:10.546291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.546339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.560635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e1b48 00:18:37.901 [2024-07-15 12:43:10.562076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.562112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:37.901 [2024-07-15 12:43:10.576568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e12d8 00:18:37.901 [2024-07-15 12:43:10.578003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.901 [2024-07-15 12:43:10.578038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.592595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e0a68 00:18:38.160 [2024-07-15 12:43:10.593999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.594035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.608316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e01f8 00:18:38.160 [2024-07-15 12:43:10.609681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.609730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.624122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190df988 00:18:38.160 [2024-07-15 12:43:10.625494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.625543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.640088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190df118 00:18:38.160 [2024-07-15 12:43:10.641428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.641477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.655938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190de8a8 00:18:38.160 [2024-07-15 12:43:10.657261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 
12:43:10.657293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.671653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190de038 00:18:38.160 [2024-07-15 12:43:10.672973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.673006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.693852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190de038 00:18:38.160 [2024-07-15 12:43:10.696377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.696412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.709810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190de8a8 00:18:38.160 [2024-07-15 12:43:10.712341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.712388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.725870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190df118 00:18:38.160 [2024-07-15 12:43:10.728304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.728354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.741728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190df988 00:18:38.160 [2024-07-15 12:43:10.744175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.744224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.757658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e01f8 00:18:38.160 [2024-07-15 12:43:10.760087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.760124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.773517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e0a68 00:18:38.160 [2024-07-15 12:43:10.775926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:38.160 [2024-07-15 12:43:10.775959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.789339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e12d8 00:18:38.160 [2024-07-15 12:43:10.791711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.791753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.805145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e1b48 00:18:38.160 [2024-07-15 12:43:10.807496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.160 [2024-07-15 12:43:10.807528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:38.160 [2024-07-15 12:43:10.820975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e23b8 00:18:38.160 [2024-07-15 12:43:10.823311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.161 [2024-07-15 12:43:10.823343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:38.161 [2024-07-15 12:43:10.836792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e2c28 00:18:38.161 [2024-07-15 12:43:10.839108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.161 [2024-07-15 12:43:10.839141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.852695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e3498 00:18:38.419 [2024-07-15 12:43:10.855005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.855041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.868529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e3d08 00:18:38.419 [2024-07-15 12:43:10.870821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.870854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.884339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e4578 00:18:38.419 [2024-07-15 12:43:10.886611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3182 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.886643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.900146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e4de8 00:18:38.419 [2024-07-15 12:43:10.902420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.902453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.916116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e5658 00:18:38.419 [2024-07-15 12:43:10.918358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.918395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.931926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e5ec8 00:18:38.419 [2024-07-15 12:43:10.934201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.934238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.947846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e6738 00:18:38.419 [2024-07-15 12:43:10.950044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.950080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.963763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e6fa8 00:18:38.419 [2024-07-15 12:43:10.965919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.965974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.979584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e7818 00:18:38.419 [2024-07-15 12:43:10.981734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.981778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:10.995499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e8088 00:18:38.419 [2024-07-15 12:43:10.997700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8198 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:10.997748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:38.419 [2024-07-15 12:43:11.011359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e88f8 00:18:38.419 [2024-07-15 12:43:11.013560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.419 [2024-07-15 12:43:11.013596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:38.420 [2024-07-15 12:43:11.027266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e9168 00:18:38.420 [2024-07-15 12:43:11.029383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.420 [2024-07-15 12:43:11.029420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:38.420 [2024-07-15 12:43:11.043066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190e99d8 00:18:38.420 [2024-07-15 12:43:11.045148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.420 [2024-07-15 12:43:11.045186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:38.420 [2024-07-15 12:43:11.058853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ea248 00:18:38.420 [2024-07-15 12:43:11.060947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.420 [2024-07-15 12:43:11.060990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:38.420 [2024-07-15 12:43:11.074806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eaab8 00:18:38.420 [2024-07-15 12:43:11.076899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.420 [2024-07-15 12:43:11.076948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:38.420 [2024-07-15 12:43:11.090611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eb328 00:18:38.420 [2024-07-15 12:43:11.092661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.420 [2024-07-15 12:43:11.092699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.106725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ebb98 00:18:38.678 [2024-07-15 12:43:11.108770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11744 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.108810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.122625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ec408 00:18:38.678 [2024-07-15 12:43:11.124662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.124702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.138629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ecc78 00:18:38.678 [2024-07-15 12:43:11.140675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.140716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.154473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ed4e8 00:18:38.678 [2024-07-15 12:43:11.156487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.156525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.170408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190edd58 00:18:38.678 [2024-07-15 12:43:11.172420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.172463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.186317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ee5c8 00:18:38.678 [2024-07-15 12:43:11.188236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.188272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.202156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eee38 00:18:38.678 [2024-07-15 12:43:11.204046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.204081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.218077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190ef6a8 00:18:38.678 [2024-07-15 12:43:11.219921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2019 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.219956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.233855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190eff18 00:18:38.678 [2024-07-15 12:43:11.235677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.235713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.249764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f0788 00:18:38.678 [2024-07-15 12:43:11.251568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.251606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.265555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f0ff8 00:18:38.678 [2024-07-15 12:43:11.267369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.267405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.281430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f1868 00:18:38.678 [2024-07-15 12:43:11.283259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.283300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.297335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f20d8 00:18:38.678 [2024-07-15 12:43:11.299141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.299174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.313233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f2948 00:18:38.678 [2024-07-15 12:43:11.315001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.315037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.329083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f31b8 00:18:38.678 [2024-07-15 12:43:11.330845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:6190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.330881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.344871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f3a28 00:18:38.678 [2024-07-15 12:43:11.346595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.678 [2024-07-15 12:43:11.346631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:38.678 [2024-07-15 12:43:11.360827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f4298 00:18:38.936 [2024-07-15 12:43:11.362507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.936 [2024-07-15 12:43:11.362549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:38.936 [2024-07-15 12:43:11.376902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f4b08 00:18:38.936 [2024-07-15 12:43:11.378562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.936 [2024-07-15 12:43:11.378602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:38.936 [2024-07-15 12:43:11.392802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f5378 00:18:38.936 [2024-07-15 12:43:11.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.394467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.408579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f5be8 00:18:38.937 [2024-07-15 12:43:11.410254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.410290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.424268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f6458 00:18:38.937 [2024-07-15 12:43:11.425905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.425943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.440375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f6cc8 00:18:38.937 [2024-07-15 12:43:11.441973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:65 nsid:1 lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.442013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.456241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f7538 00:18:38.937 [2024-07-15 12:43:11.457829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.457866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.472115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f7da8 00:18:38.937 [2024-07-15 12:43:11.473672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.473709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.487926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f8618 00:18:38.937 [2024-07-15 12:43:11.489467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.489503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.503744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f8e88 00:18:38.937 [2024-07-15 12:43:11.505262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.505297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.519543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f96f8 00:18:38.937 [2024-07-15 12:43:11.521052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.521086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.535371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190f9f68 00:18:38.937 [2024-07-15 12:43:11.536896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.536931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.551295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fa7d8 00:18:38.937 [2024-07-15 12:43:11.552764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:93 nsid:1 lba:19969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.552798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.567311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fb048 00:18:38.937 [2024-07-15 12:43:11.568794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.568833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.583232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fb8b8 00:18:38.937 [2024-07-15 12:43:11.584655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.584692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.599078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fc128 00:18:38.937 [2024-07-15 12:43:11.600492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.600529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:38.937 [2024-07-15 12:43:11.614951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fc998 00:18:38.937 [2024-07-15 12:43:11.616345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.937 [2024-07-15 12:43:11.616398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:39.201 [2024-07-15 12:43:11.631100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fd208 00:18:39.201 [2024-07-15 12:43:11.632487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.201 [2024-07-15 12:43:11.632531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:39.201 [2024-07-15 12:43:11.647076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835360) with pdu=0x2000190fda78 00:18:39.201 [2024-07-15 12:43:11.648425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.201 [2024-07-15 12:43:11.648478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:39.201 00:18:39.201 Latency(us) 00:18:39.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.201 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:18:39.201 nvme0n1 : 2.00 15907.09 62.14 0.00 0.00 8038.58 7536.64 30384.87 00:18:39.201 =================================================================================================================== 00:18:39.201 Total : 15907.09 62.14 0.00 0.00 8038.58 7536.64 30384.87 00:18:39.201 0 00:18:39.201 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:39.201 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:39.201 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:39.201 | .driver_specific 00:18:39.201 | .nvme_error 00:18:39.201 | .status_code 00:18:39.201 | .command_transient_transport_error' 00:18:39.201 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 )) 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80712 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80712 ']' 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80712 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80712 00:18:39.465 killing process with pid 80712 00:18:39.465 Received shutdown signal, test time was about 2.000000 seconds 00:18:39.465 00:18:39.465 Latency(us) 00:18:39.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.465 =================================================================================================================== 00:18:39.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80712' 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80712 00:18:39.465 12:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80712 00:18:39.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
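The pass/fail decision for the run above comes down to reading the per-bdev NVMe error counters back over bdevperf's RPC socket: with --nvme-error-stat enabled, bdev_get_iostat reports them under driver_specific.nvme_error, and the jq filter traced at host/digest.sh@28 pulls out the transient-transport-error count (125 here). A minimal stand-alone sketch of that query, assuming bperf.sock is still listening and rpc.py/jq are available as in this job:

# Read the transient transport error counter the same way get_transient_errcount does above.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
  bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The test only asserts that at least one such error was recorded.
(( errcount > 0 )) || echo "no transient transport errors counted" >&2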
00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80772 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80772 /var/tmp/bperf.sock 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80772 ']' 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.724 12:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:39.724 [2024-07-15 12:43:12.254634] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:39.724 [2024-07-15 12:43:12.255004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80772 ] 00:18:39.724 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:39.724 Zero copy mechanism will not be used. 
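For reference, the launch traced above amounts to starting a second bdevperf instance on its own RPC socket and waiting for that socket before configuring it. A rough sketch, assuming the repo layout used in this job and the waitforlisten helper from the test's common scripts; per the later trace, -z keeps the job idle until perform_tests is sent over RPC:

# -m 2: core mask 0x2; -r: private RPC socket; -w/-o/-q/-t: 128 KiB random writes
# at queue depth 16 for 2 seconds; -z: wait for an RPC to start the workload.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Block until the new process is up and its RPC socket accepts connections.
waitforlisten "$bperfpid" /var/tmp/bperf.sock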
00:18:39.724 [2024-07-15 12:43:12.390335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.982 [2024-07-15 12:43:12.506343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.982 [2024-07-15 12:43:12.560708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:40.548 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.548 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:40.548 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:40.548 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:40.806 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:40.806 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.806 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:40.806 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.806 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:40.806 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:41.062 nvme0n1 00:18:41.320 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:41.320 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.320 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:41.320 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.320 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:41.320 12:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:41.320 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:41.320 Zero copy mechanism will not be used. 00:18:41.320 Running I/O for 2 seconds... 
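The setup traced just above is the core of this error case: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, the controller is attached with data digest (--ddgst) turned on, and crc32c error injection is then enabled (via rpc_cmd, which is assumed here to hit the nvmf target's default RPC socket, while bperf_rpc uses /var/tmp/bperf.sock) so that data-digest verification fails and each write completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly as the records that follow show. A condensed sketch of that RPC sequence under those assumptions:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Count NVMe errors per status code and retry failed I/O indefinitely at the bdev layer.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any stale crc32c injection on the target (assumed default RPC socket), then
# attach the controller with data digest enabled so every PDU payload is CRC-checked.
$rpc accel_error_inject_error -o crc32c -t disable
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Start corrupting crc32c results (flags exactly as traced above), then run the queued job.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests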
00:18:41.320 [2024-07-15 12:43:13.867025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.867715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.868018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.873150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.873504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.873603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.878800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.879086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.879127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.884243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.884613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.884721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.889887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.890229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.890309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.895729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.896297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.896591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.901939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.902408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.902542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.907590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.908096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.908291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.913640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.914128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.914372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.919625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.919945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.919975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.320 [2024-07-15 12:43:13.925021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.320 [2024-07-15 12:43:13.925318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.320 [2024-07-15 12:43:13.925348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.930382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.930681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.930715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.935894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.936191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.936219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.941439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.941759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.941797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.947066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.947367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.947395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.952508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.952822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.952854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.957960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.958261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.958289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.963326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.963636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.963664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.968657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.968964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.968996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.974121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.974438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.974466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.979477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.979795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.979823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.984784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.985080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.985108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.990028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.990348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.990375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:13.995363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:13.995663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:13.995691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.321 [2024-07-15 12:43:14.000880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.321 [2024-07-15 12:43:14.001220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.321 [2024-07-15 12:43:14.001251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.579 [2024-07-15 12:43:14.006356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.579 [2024-07-15 12:43:14.006664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.579 [2024-07-15 12:43:14.006705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.579 [2024-07-15 12:43:14.011781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.579 [2024-07-15 12:43:14.012092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.579 [2024-07-15 12:43:14.012122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.579 [2024-07-15 12:43:14.017101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.579 [2024-07-15 12:43:14.017396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.579 
[2024-07-15 12:43:14.017426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.579 [2024-07-15 12:43:14.022390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.579 [2024-07-15 12:43:14.022691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.022719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.027754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.028092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.028127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.033346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.033639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.033669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.038719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.039063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.039092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.044146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.044476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.044505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.049550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.049890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.049919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.055053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.055369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.055399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.060501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.060816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.060845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.065857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.066156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.066184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.071209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.071521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.071550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.076570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.076879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.076908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.082021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.082320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.082349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.087368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.087666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.087696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.092713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.093028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.093061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.098101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.098399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.098428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.103470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.103811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.103839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.108804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.109102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.109130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.114202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.114498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.114526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.119408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.119726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.119749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.124830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.125151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.125178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.130192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.130489] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.130517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.135497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.135822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.135850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.140922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.141225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.141251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.146246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.146556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.146585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.151600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.151913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.151944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.156955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.157254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.157284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.162350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.162655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.162683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.167823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.168125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.168152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.173187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.173498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.580 [2024-07-15 12:43:14.173541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.580 [2024-07-15 12:43:14.178641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.580 [2024-07-15 12:43:14.178956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.178988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.183967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.184278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.184306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.189310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.189623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.189651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.194664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.195021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.195049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.199999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.200300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.200328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.205354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 
[2024-07-15 12:43:14.205663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.205692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.210627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.210973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.211005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.216105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.216406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.216436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.221402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.221705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.221757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.226658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.226996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.227025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.231938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.232244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.232271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.237199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.237495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.237524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.242568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) 
with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.242919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.242952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.247874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.248186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.248214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.253265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.253567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.253594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.581 [2024-07-15 12:43:14.258576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.581 [2024-07-15 12:43:14.258968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.581 [2024-07-15 12:43:14.258996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.264167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.264494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.264525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.269727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.270072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.270112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.275114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.275414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.275444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.280520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.280829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.280861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.285894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.286209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.286240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.291200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.291517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.291546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.296552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.296859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.296888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.301847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.302163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.302191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.307252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.307583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.307614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.312544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.312858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.312894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.317867] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.318164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.318192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.323155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.323450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.323480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.328400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.328749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.328778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.333738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.334089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.334128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.339075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.339402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.339431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.344443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.344781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.344809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.349898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.350220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.350247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:41.840 [2024-07-15 12:43:14.355207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.355531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.355560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.360409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.360753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.360781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.365956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.840 [2024-07-15 12:43:14.366254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.840 [2024-07-15 12:43:14.366282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.840 [2024-07-15 12:43:14.371372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.371688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.371716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.376842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.377142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.377180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.382145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.382442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.382471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.387484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.387798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.387826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.392882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.393179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.393208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.398190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.398490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.398519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.403531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.403876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.403904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.408923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.409233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.409267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.414298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.414593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.414630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.419662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.419989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.420022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.425014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.425309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.425340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.430394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.430690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.430718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.435734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.436056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.436084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.441199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.441494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.441523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.446619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.446932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.446959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.451924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.452219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.452247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.457190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.457484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.457512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.462473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.462797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.462825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.467888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.468198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.468227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.473208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.473503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.473531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.478585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.478916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.478948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.483956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.484253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.484281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.489278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.489589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.489618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.494602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.494930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.494957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.499941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.500239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 
[2024-07-15 12:43:14.500267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.505252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.505557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.505585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.510537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.510852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.510885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.515910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.516210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.516238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:41.841 [2024-07-15 12:43:14.521177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:41.841 [2024-07-15 12:43:14.521480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.841 [2024-07-15 12:43:14.521513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.526475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.526788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.526819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.531813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.532114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.532148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.537173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.537482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.537512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.542405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.542704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.542745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.547812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.548103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.548132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.553080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.553377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.553405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.558399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.558708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.558748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.563784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.564085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.564116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.568841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.568916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.568942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.574190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.100 [2024-07-15 12:43:14.574275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.100 [2024-07-15 12:43:14.574299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.100 [2024-07-15 12:43:14.579633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.579729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.579752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.584889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.584992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.585015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.590209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.590283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.590306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.595462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.595580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.595602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.600764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.600834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.600856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.606169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.606240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.606262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.611507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.611595] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.611618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.616782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.616855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.616879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.622105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.622174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.622196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.627365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.627454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.627476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.632526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.632598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.632620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.637887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.637956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.637979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.643128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.643202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.643224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.648379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.648464] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.648487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.653911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.653999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.654021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.659420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.659496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.659522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.664792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.664882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.664907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.670223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.670313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.670336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.675546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.675648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.675671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.680872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.680947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.680970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.686092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 
00:18:42.101 [2024-07-15 12:43:14.686162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.686185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.691350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.691434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.691457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.696634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.696709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.696731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.701937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.702007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.702030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.707389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.707462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.707485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.712597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.712677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.712699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.718032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.718121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.718144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.723451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.723541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.723563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.728827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.728929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.101 [2024-07-15 12:43:14.728951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.101 [2024-07-15 12:43:14.734100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.101 [2024-07-15 12:43:14.734203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.734225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.739394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.739479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.739501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.744594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.744663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.744686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.749932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.750020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.750042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.755232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.755316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.755338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.760501] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.760573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.760595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.765785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.765891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.765913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.771169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.771255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.771277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.776515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.776585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.776607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.102 [2024-07-15 12:43:14.782042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.102 [2024-07-15 12:43:14.782139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.102 [2024-07-15 12:43:14.782164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.787447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.787520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.787545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.792763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.792836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.792862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:42.361 [2024-07-15 12:43:14.797888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.797974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.797997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.803114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.803213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.803235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.808333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.808418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.808440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.813553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.813653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.813675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.818967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.819068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.819092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.824464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.824539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.824564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.829688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.829809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.829834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.834902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.834995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.835017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.840095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.840193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.840216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.845373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.845474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.361 [2024-07-15 12:43:14.845496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.361 [2024-07-15 12:43:14.850650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.361 [2024-07-15 12:43:14.850736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.850790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.855907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.855990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.856012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.861195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.861297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.861319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.866529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.866618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.866641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.871813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.871910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.871932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.877202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.877288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.877309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.882547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.882639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.882661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.887905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.887992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.888014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.893162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.893257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.893279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.898493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.898580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.898603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.903617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.903705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.903727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.908928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.909030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.909053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.914252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.914349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.914370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.919543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.919626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.919648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.924792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.924877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.924899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.930006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.930106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.930128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.935376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.935464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.935489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.940549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.940634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 
[2024-07-15 12:43:14.940659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.945818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.945889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.945912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.951044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.951129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.951151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.956257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.956335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.956358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.961587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.961695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.961716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.967065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.967196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.967218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.972417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.972520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.972542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.977930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.978018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.978041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.983301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.983385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.983407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.988635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.988716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.988740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.994118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.994188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.994211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:14.999421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:14.999539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.362 [2024-07-15 12:43:14.999561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.362 [2024-07-15 12:43:15.004766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.362 [2024-07-15 12:43:15.004853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.004876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.363 [2024-07-15 12:43:15.010020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.363 [2024-07-15 12:43:15.010108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.010131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.363 [2024-07-15 12:43:15.015232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.363 [2024-07-15 12:43:15.015328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.015350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.363 [2024-07-15 12:43:15.020528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.363 [2024-07-15 12:43:15.020627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.020649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.363 [2024-07-15 12:43:15.025706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.363 [2024-07-15 12:43:15.025804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.025827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.363 [2024-07-15 12:43:15.030961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.363 [2024-07-15 12:43:15.031044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.031065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.363 [2024-07-15 12:43:15.036140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.363 [2024-07-15 12:43:15.036234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.036255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.363 [2024-07-15 12:43:15.041369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.363 [2024-07-15 12:43:15.041455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.363 [2024-07-15 12:43:15.041480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.623 [2024-07-15 12:43:15.046654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.623 [2024-07-15 12:43:15.046740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.623 [2024-07-15 12:43:15.046799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.623 [2024-07-15 12:43:15.052000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.623 [2024-07-15 12:43:15.052088] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.623 [2024-07-15 12:43:15.052113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.623 [2024-07-15 12:43:15.057230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.623 [2024-07-15 12:43:15.057321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.623 [2024-07-15 12:43:15.057345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.623 [2024-07-15 12:43:15.062558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.623 [2024-07-15 12:43:15.062643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.623 [2024-07-15 12:43:15.062666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.623 [2024-07-15 12:43:15.067895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.623 [2024-07-15 12:43:15.067984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.623 [2024-07-15 12:43:15.068007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.623 [2024-07-15 12:43:15.073094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.623 [2024-07-15 12:43:15.073187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.623 [2024-07-15 12:43:15.073209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.623 [2024-07-15 12:43:15.078706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.623 [2024-07-15 12:43:15.078818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.623 [2024-07-15 12:43:15.078844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.083949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.084053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.084079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.089243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.089327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.089350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.094592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.094677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.094701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.099972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.100059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.100082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.105188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.105272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.105296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.110554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.110638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.110660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.115976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.116059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.116081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.121481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.121578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.121600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.126893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 
00:18:42.624 [2024-07-15 12:43:15.126983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.127006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.132194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.132278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.132300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.137579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.137660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.137681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.142944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.143049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.143071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.148229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.148333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.148356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.153682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.153799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.153822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.158944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.159028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.159051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.164220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.164305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.164326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.169666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.169787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.169810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.174896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.174983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.175005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.180170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.180241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.180265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.624 [2024-07-15 12:43:15.185436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.624 [2024-07-15 12:43:15.185525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.624 [2024-07-15 12:43:15.185548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.190751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.190841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.190864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.196101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.196176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.196202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.201408] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.201485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.201511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.206798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.206888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.206911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.212104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.212178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.212202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.217328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.217403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.217426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.222563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.222633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.222656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.227844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.227914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.227936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.233087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.233159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.233181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:42.625 [2024-07-15 12:43:15.238316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.238389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.238412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.243574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.243648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.243671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.248818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.248890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.248913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.254016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.254093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.254115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.259267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.259341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.259364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.264544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.264626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.264649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.269914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.269989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.270011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.275094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.275169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.275192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.280387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.280467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.280490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.285642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.285715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.285751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.625 [2024-07-15 12:43:15.290915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.625 [2024-07-15 12:43:15.290989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.625 [2024-07-15 12:43:15.291012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.626 [2024-07-15 12:43:15.296159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.626 [2024-07-15 12:43:15.296231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.626 [2024-07-15 12:43:15.296254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.626 [2024-07-15 12:43:15.301439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.626 [2024-07-15 12:43:15.301512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.626 [2024-07-15 12:43:15.301537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.306659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.306757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.306782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.311836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.311913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.311945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.317049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.317134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.317157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.322315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.322385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.322409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.327536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.327607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.327630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.332849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.332929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.332952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.338126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.338199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.338226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.343379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.343463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.343487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.348670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.348766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.348796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.353884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.353956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.353979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.359088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.359206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.359228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.364295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.364372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.364394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.369618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.369690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.369713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.374902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.374986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.375009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.380091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.380179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 
[2024-07-15 12:43:15.380203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.385345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.385432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.385454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.390499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.390570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.390593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.395749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.395834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.395857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.401003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.401095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.401117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.406318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.406415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.406437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.411632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.411704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.411740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.416937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.417031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.417053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.422191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.422262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.422285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.427430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.427524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.427547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.432714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.432825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.432847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.885 [2024-07-15 12:43:15.437931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.885 [2024-07-15 12:43:15.438006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.885 [2024-07-15 12:43:15.438028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.443151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.443225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.443247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.448373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.448489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.448512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.453697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.453793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.453819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.459023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.459107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.459133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.464276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.464359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.464382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.469554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.469637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.469660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.474847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.474929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.474951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.480175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.480255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.480278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.485431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.485511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.485533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.490669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.490763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.490787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.495987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.496059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.496081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.501213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.501286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.501309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.506398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.506479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.506501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.511691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.511780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.511804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.516998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.517072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.517094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.522291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.522367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.522390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.527451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 
[2024-07-15 12:43:15.527532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.527554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.532790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.532862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.532885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.538054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.538125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.538148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.543329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.543400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.543423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.548568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.548662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.548685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.553867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.553962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.553985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.559109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.559182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.559212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.886 [2024-07-15 12:43:15.564404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) 
with pdu=0x2000190fef90 00:18:42.886 [2024-07-15 12:43:15.564501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.886 [2024-07-15 12:43:15.564527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.145 [2024-07-15 12:43:15.569776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.145 [2024-07-15 12:43:15.569862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.145 [2024-07-15 12:43:15.569888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.145 [2024-07-15 12:43:15.575136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.145 [2024-07-15 12:43:15.575211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.575237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.580506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.580589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.580619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.585786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.585889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.585913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.591100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.591175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.591199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.596413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.596508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.596533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.601786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.601893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.601918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.607028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.607123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.607147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.612345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.612441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.612482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.617648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.617744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.617773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.622912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.622985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.623008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.628203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.628282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.628305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.633546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.633641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.633664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.638760] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.638854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.638876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.643995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.644088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.644111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.649224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.649318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.649341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.655053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.655130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.655153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.660278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.660360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.660383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.665567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.665652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.665674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.670858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.670957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.670980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
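The run of near-identical records above is the expected output of this digest-error pass: data_crc32_calc_done() in tcp.c flags a CRC32C data digest mismatch on the qpair, and each affected WRITE is completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22). bdevperf accumulates these completions in a per-bdev counter, and once the roughly 2-second run ends the test reads that counter back over the bperf RPC socket and checks that it is non-zero (375 in this run, as the trace further below shows). A minimal sketch of that query, combining into a single pipeline the same rpc.py path, socket, and jq filter that appear later in this log:

# Count the transient transport errors bdevperf has recorded for nvme0n1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'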
00:18:43.146 [2024-07-15 12:43:15.676057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.676129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.676151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.681316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.681418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.681440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.686606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.686700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.686723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.691901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.691993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.692015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.697167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.697261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.697283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.702387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.702466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.702488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.707608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.707710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.707752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.712813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.712884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.712909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.718053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.718159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.718184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.723353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.723446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.723469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.146 [2024-07-15 12:43:15.728550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.146 [2024-07-15 12:43:15.728636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.146 [2024-07-15 12:43:15.728658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.733891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.733975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.733998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.739136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.739219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.739242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.744407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.744502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.744525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.749680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.749768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.749791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.754987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.755080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.755102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.760203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.760286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.760308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.765505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.765588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.765610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.770694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.770796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.770819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.776059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.776144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.776166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.781338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.781421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.781444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.786594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.786679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.786700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.791973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.792058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.792080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.797228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.797310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.797332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.802531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.802617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.802639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.807832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.807904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.807926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.813045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.813131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.813153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.818451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.818545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 
[2024-07-15 12:43:15.818571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.147 [2024-07-15 12:43:15.823877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.147 [2024-07-15 12:43:15.823981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.147 [2024-07-15 12:43:15.824006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.405 [2024-07-15 12:43:15.829139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.405 [2024-07-15 12:43:15.829224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.405 [2024-07-15 12:43:15.829248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.405 [2024-07-15 12:43:15.834369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.405 [2024-07-15 12:43:15.834446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.405 [2024-07-15 12:43:15.834471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.405 [2024-07-15 12:43:15.839549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.405 [2024-07-15 12:43:15.839631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.405 [2024-07-15 12:43:15.839654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.405 [2024-07-15 12:43:15.844897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.405 [2024-07-15 12:43:15.844986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.405 [2024-07-15 12:43:15.845013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.405 [2024-07-15 12:43:15.850172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.405 [2024-07-15 12:43:15.850262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.405 [2024-07-15 12:43:15.850287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.405 [2024-07-15 12:43:15.855397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.405 [2024-07-15 12:43:15.855479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:43.405 [2024-07-15 12:43:15.855503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.405 [2024-07-15 12:43:15.860705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x835500) with pdu=0x2000190fef90 00:18:43.405 [2024-07-15 12:43:15.860809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.406 [2024-07-15 12:43:15.860833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.406 00:18:43.406 Latency(us) 00:18:43.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.406 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:43.406 nvme0n1 : 2.00 5812.36 726.54 0.00 0.00 2746.63 1623.51 6285.50 00:18:43.406 =================================================================================================================== 00:18:43.406 Total : 5812.36 726.54 0.00 0.00 2746.63 1623.51 6285.50 00:18:43.406 0 00:18:43.406 12:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:43.406 12:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:43.406 12:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:43.406 12:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:43.406 | .driver_specific 00:18:43.406 | .nvme_error 00:18:43.406 | .status_code 00:18:43.406 | .command_transient_transport_error' 00:18:43.663 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80772 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80772 ']' 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80772 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80772 00:18:43.664 killing process with pid 80772 00:18:43.664 Received shutdown signal, test time was about 2.000000 seconds 00:18:43.664 00:18:43.664 Latency(us) 00:18:43.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.664 =================================================================================================================== 00:18:43.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80772' 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@967 -- # kill 80772 00:18:43.664 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80772 00:18:43.922 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80559 00:18:43.922 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80559 ']' 00:18:43.922 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80559 00:18:43.922 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:43.923 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.923 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80559 00:18:43.923 killing process with pid 80559 00:18:43.923 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:43.923 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:43.923 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80559' 00:18:43.923 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80559 00:18:43.923 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80559 00:18:44.182 ************************************ 00:18:44.182 END TEST nvmf_digest_error 00:18:44.182 ************************************ 00:18:44.182 00:18:44.182 real 0m18.485s 00:18:44.182 user 0m35.445s 00:18:44.182 sys 0m4.993s 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.182 rmmod nvme_tcp 00:18:44.182 rmmod nvme_fabrics 00:18:44.182 rmmod nvme_keyring 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80559 ']' 00:18:44.182 Process with pid 80559 is not found 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80559 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80559 ']' 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80559 00:18:44.182 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80559) - No such process 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80559 is not found' 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:44.182 00:18:44.182 real 0m38.184s 00:18:44.182 user 1m12.166s 00:18:44.182 sys 0m10.217s 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.182 12:43:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:44.182 ************************************ 00:18:44.182 END TEST nvmf_digest 00:18:44.182 ************************************ 00:18:44.441 12:43:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.441 12:43:16 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:44.441 12:43:16 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:18:44.441 12:43:16 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:44.441 12:43:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.441 12:43:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.441 12:43:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.441 ************************************ 00:18:44.441 START TEST nvmf_host_multipath 00:18:44.441 ************************************ 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:44.441 * Looking for test storage... 
00:18:44.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.441 12:43:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:44.441 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:44.442 12:43:17 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:44.442 Cannot find device "nvmf_tgt_br" 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.442 Cannot find device "nvmf_tgt_br2" 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:18:44.442 Cannot find device "nvmf_tgt_br" 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:44.442 Cannot find device "nvmf_tgt_br2" 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:44.442 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:44.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:18:44.701 00:18:44.701 --- 10.0.0.2 ping statistics --- 00:18:44.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.701 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:44.701 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:44.701 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:44.701 00:18:44.701 --- 10.0.0.3 ping statistics --- 00:18:44.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.701 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:44.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:44.701 00:18:44.701 --- 10.0.0.1 ping statistics --- 00:18:44.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.701 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81042 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81042 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 81042 ']' 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.701 12:43:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 [2024-07-15 12:43:17.421364] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:44.960 [2024-07-15 12:43:17.421461] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.960 [2024-07-15 12:43:17.555630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:45.219 [2024-07-15 12:43:17.676172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.219 [2024-07-15 12:43:17.676231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.219 [2024-07-15 12:43:17.676245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.219 [2024-07-15 12:43:17.676255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.219 [2024-07-15 12:43:17.676263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
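The nvmf_veth_init block a little further up is what the rest of the multipath run depends on: the SPDK target is started inside the nvmf_tgt_ns_spdk network namespace with two addresses (10.0.0.2 and 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, and all veth peers hang off the nvmf_br bridge, with iptables opening TCP/4420 and allowing forwarding across the bridge. A condensed, hand-runnable sketch of the same topology (not the verbatim common.sh code path; interface names, addresses and rules are copied from the log, the loops are added for brevity):

  # one namespace for the target, three veth pairs: initiator, target path 1, target path 2
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target portals 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring links up and enslave the bridge-side peers to nvmf_br
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # let NVMe/TCP traffic in and allow hairpin forwarding on the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping exchanges recorded right after that (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) only verify the wiring before modprobe nvme-tcp and nvmfappstart run.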
00:18:45.219 [2024-07-15 12:43:17.676422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.219 [2024-07-15 12:43:17.676517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.219 [2024-07-15 12:43:17.733186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81042 00:18:45.787 12:43:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:46.045 [2024-07-15 12:43:18.637397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.045 12:43:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:46.304 Malloc0 00:18:46.304 12:43:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:46.562 12:43:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.820 12:43:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.138 [2024-07-15 12:43:19.666064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.138 12:43:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:47.396 [2024-07-15 12:43:19.898254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81092 00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
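That xtrace block is the whole target-side build-up for the multipath test, driven over the default RPC socket: one TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a single subsystem with ANA reporting enabled that listens on the same address at ports 4420 and 4421, i.e. two portals to one namespace. Stripped of the xtrace noise, the sequence is as follows (commands and flags copied from the log; the bdevperf-side attach calls appear in the log just below and are included here for context):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512 B block size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # initiator side: bdevperf on its own RPC socket, attaching both portals, the second as a multipath leg
  brpc="$rpc -s /var/tmp/bdevperf.sock"
  $brpc bdev_nvme_set_options -r -1
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Both attach calls return the same bdev name, Nvme0n1, which is what bdevperf then drives with -q 128 -o 4096 -w verify for the rest of the run.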
00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81092 /var/tmp/bdevperf.sock 00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81092 ']' 00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.396 12:43:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.397 12:43:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.397 12:43:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:48.331 12:43:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.331 12:43:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:48.331 12:43:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:48.589 12:43:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:48.847 Nvme0n1 00:18:49.105 12:43:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:49.363 Nvme0n1 00:18:49.363 12:43:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:49.363 12:43:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:50.298 12:43:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:50.298 12:43:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:50.557 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:50.813 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:50.813 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:50.813 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81143 00:18:50.813 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # active_port=4421 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.377 Attaching 4 probes... 00:18:57.377 @path[10.0.0.2, 4421]: 17226 00:18:57.377 @path[10.0.0.2, 4421]: 17787 00:18:57.377 @path[10.0.0.2, 4421]: 17808 00:18:57.377 @path[10.0.0.2, 4421]: 17925 00:18:57.377 @path[10.0.0.2, 4421]: 17786 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81143 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:57.377 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:57.668 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:57.668 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81254 00:18:57.668 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:57.668 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.260 Attaching 4 probes... 
00:19:04.260 @path[10.0.0.2, 4420]: 17476 00:19:04.260 @path[10.0.0.2, 4420]: 17750 00:19:04.260 @path[10.0.0.2, 4420]: 17754 00:19:04.260 @path[10.0.0.2, 4420]: 17640 00:19:04.260 @path[10.0.0.2, 4420]: 17734 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81254 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:04.260 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:04.518 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81368 00:19:04.518 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:04.518 12:43:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:11.133 12:43:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:11.133 12:43:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.133 Attaching 4 probes... 
00:19:11.133 @path[10.0.0.2, 4421]: 13970 00:19:11.133 @path[10.0.0.2, 4421]: 17562 00:19:11.133 @path[10.0.0.2, 4421]: 17475 00:19:11.133 @path[10.0.0.2, 4421]: 17648 00:19:11.133 @path[10.0.0.2, 4421]: 17633 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81368 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:11.133 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:11.412 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:11.412 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:11.413 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81486 00:19:11.413 12:43:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:17.971 12:43:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:17.971 12:43:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.971 Attaching 4 probes... 
00:19:17.971 00:19:17.971 00:19:17.971 00:19:17.971 00:19:17.971 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81486 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81593 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:17.971 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:24.534 Attaching 4 probes... 
00:19:24.534 @path[10.0.0.2, 4421]: 17229 00:19:24.534 @path[10.0.0.2, 4421]: 17470 00:19:24.534 @path[10.0.0.2, 4421]: 17452 00:19:24.534 @path[10.0.0.2, 4421]: 17440 00:19:24.534 @path[10.0.0.2, 4421]: 17461 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81593 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:24.534 12:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:24.535 12:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:25.906 12:43:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:25.906 12:43:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81721 00:19:25.906 12:43:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:25.906 12:43:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:32.482 Attaching 4 probes... 
00:19:32.482 @path[10.0.0.2, 4420]: 16446 00:19:32.482 @path[10.0.0.2, 4420]: 16655 00:19:32.482 @path[10.0.0.2, 4420]: 16645 00:19:32.482 @path[10.0.0.2, 4420]: 16720 00:19:32.482 @path[10.0.0.2, 4420]: 16921 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81721 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:32.482 [2024-07-15 12:44:04.723026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:32.482 12:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:39.037 12:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:39.037 12:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81891 00:19:39.037 12:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:39.037 12:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:44.318 12:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:44.318 12:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:44.881 Attaching 4 probes... 
00:19:44.881 @path[10.0.0.2, 4421]: 16570 00:19:44.881 @path[10.0.0.2, 4421]: 17121 00:19:44.881 @path[10.0.0.2, 4421]: 17209 00:19:44.881 @path[10.0.0.2, 4421]: 17177 00:19:44.881 @path[10.0.0.2, 4421]: 17096 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81891 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81092 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81092 ']' 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81092 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81092 00:19:44.881 killing process with pid 81092 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81092' 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81092 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81092 00:19:44.881 Connection closed with partial response: 00:19:44.881 00:19:44.881 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81092 00:19:44.881 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:45.149 [2024-07-15 12:43:19.961782] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:45.149 [2024-07-15 12:43:19.961969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81092 ] 00:19:45.149 [2024-07-15 12:43:20.100159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.149 [2024-07-15 12:43:20.228872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.149 [2024-07-15 12:43:20.285296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:45.149 Running I/O for 90 seconds... 
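The wall of ASYMMETRIC ACCESS INACCESSIBLE completions that follows in try.txt is the expected fallout of the test flipping listener ANA states while bdevperf keeps 128 verify commands in flight; each flip forces the initiator to retry down the other leg. Every confirm_io_on_port block earlier in the log follows the same recipe: run the nvmf_path.bt bpftrace script against the target for six seconds, then compare the trsvcid the target reports in the expected ANA state against the portal the traced I/O actually used. A hedged sketch of that check (pid 81042 and the expected state "optimized" are taken from the log; redirecting the probe output into test/nvmf/host/trace.txt is an assumption about how bpftrace.sh is wired here):

  spdk=/home/vagrant/spdk_repo/spdk
  trace=$spdk/test/nvmf/host/trace.txt

  # attach the path-tracing probe to nvmf_tgt (pid 81042) and let I/O run for a while
  $spdk/scripts/bpftrace.sh 81042 $spdk/scripts/bpf/nvmf_path.bt > "$trace" &
  dtrace_pid=$!
  sleep 6

  # what the target claims: trsvcid of the listener whose first ANA state matches the expectation
  active_port=$($spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
      jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

  # what the I/O actually did: first '@path[10.0.0.2, <port>]: <count>' line emitted by the probe
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | sed -n 1p | cut -d ']' -f1)

  [[ $port == "$active_port" ]]      # empty-vs-empty also passes, as in the all-inaccessible round
  kill $dtrace_pid
  rm -f "$trace"

In the round where both listeners are set inaccessible, both sides of that comparison come back empty, which is why the log shows active_port= and port= with the '' == '' checks still succeeding.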
00:19:45.149 [2024-07-15 12:43:30.105749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.105830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.105891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.105914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.105938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.105954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.105976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.105992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.149 [2024-07-15 12:43:30.106470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.106981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.106996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.107017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.107045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.107069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.107085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.107112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.149 [2024-07-15 12:43:30.107136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.149 [2024-07-15 12:43:30.107159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.150 [2024-07-15 12:43:30.107326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.107712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.107766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.107803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.107841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.107878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.107915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.107954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.107976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.107993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.108039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:19:45.150 [2024-07-15 12:43:30.108489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.150 [2024-07-15 12:43:30.108666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.108704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.150 [2024-07-15 12:43:30.108757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:45.150 [2024-07-15 12:43:30.108779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.108795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.108817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.108838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.108860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.108875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.108897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.108913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.108943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.108959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.108981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.109005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.151 [2024-07-15 12:43:30.109725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.151 [2024-07-15 12:43:30.109855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.109892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.109929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.109967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.109989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.110413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.151 [2024-07-15 12:43:30.110429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.151 [2024-07-15 12:43:30.111940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:30.111972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:19:45.152 [2024-07-15 12:43:30.112428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:30.112571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:30.112587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.650636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.650964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.152 [2024-07-15 12:43:36.650979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.651018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.651037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.651062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.651079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.152 [2024-07-15 12:43:36.651100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.152 [2024-07-15 12:43:36.651116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.651806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.651843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.651880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.651917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.651954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.651976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.651991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.652028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.652067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.153 [2024-07-15 12:43:36.652105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.652142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 
12:43:36.652164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.652179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.652216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.652260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.652330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.153 [2024-07-15 12:43:36.652370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.153 [2024-07-15 12:43:36.652392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.652868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.652906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.652943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.652965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.652980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.653018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.653054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.653091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.653128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.154 [2024-07-15 12:43:36.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.653202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.653249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.653292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.154 [2024-07-15 12:43:36.653330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:45.154 [2024-07-15 12:43:36.653351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:19:45.154 [2024-07-15 12:43:36.653367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:19:45.154 [2024-07-15 12:43:36.653388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:45.154 [2024-07-15 12:43:36.653404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[... identical nvme_qpair NOTICE command/completion pairs repeat for WRITE lba:107744-107960 (SGL DATA BLOCK OFFSET) and READ lba:107136-107320 (SGL TRANSPORT DATA BLOCK), all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:19:45.156 [2024-07-15 12:43:43.804011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:45.156 [2024-07-15 12:43:43.804090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... identical nvme_qpair NOTICE command/completion pairs repeat for WRITE lba:16656-17024 and READ lba:16008-16640, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:19:45.160 [2024-07-15 12:43:57.187836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:45.160 [2024-07-15 12:43:57.187916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_qpair NOTICE command/completion pairs repeat for READ lba:67144-67192 and WRITE lba:67688-67736, all completing with ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:19:45.160 [2024-07-15 12:43:57.188406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:19:45.160 [2024-07-15 12:43:57.188420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.160 [2024-07-15 12:43:57.188436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.160 [2024-07-15 12:43:57.188450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.160 [2024-07-15 12:43:57.188466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.160 [2024-07-15 12:43:57.188490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.160 [2024-07-15 12:43:57.188516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.160 [2024-07-15 12:43:57.188530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.160 [2024-07-15 12:43:57.188546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.188790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.188820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.188850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.188881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.188911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.188941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.188971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.188987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.189002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.161 [2024-07-15 12:43:57.189032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.161 [2024-07-15 12:43:57.189498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.161 [2024-07-15 12:43:57.189514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.162 [2024-07-15 12:43:57.189528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.189967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.189983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 
[2024-07-15 12:43:57.190023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.162 [2024-07-15 12:43:57.190283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.162 [2024-07-15 12:43:57.190315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.162 [2024-07-15 12:43:57.190345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.162 [2024-07-15 12:43:57.190377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.162 [2024-07-15 12:43:57.190407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.162 [2024-07-15 12:43:57.190437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.162 [2024-07-15 12:43:57.190467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.162 [2024-07-15 12:43:57.190483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.190497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.190534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:89 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67560 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.190971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.190987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.191001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.163 [2024-07-15 12:43:57.191031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 
[2024-07-15 12:43:57.191287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.163 [2024-07-15 12:43:57.191433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.163 [2024-07-15 12:43:57.191447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.164 [2024-07-15 12:43:57.191483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.164 [2024-07-15 12:43:57.191514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.164 [2024-07-15 12:43:57.191544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.164 [2024-07-15 12:43:57.191928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.191983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.164 [2024-07-15 12:43:57.192000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.164 [2024-07-15 12:43:57.192012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67680 len:8 PRP1 0x0 PRP2 0x0 00:19:45.164 [2024-07-15 12:43:57.192025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.192087] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x253f6d0 was disconnected and freed. reset controller. 00:19:45.164 [2024-07-15 12:43:57.192193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.164 [2024-07-15 12:43:57.192219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.192235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.164 [2024-07-15 12:43:57.192251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.192265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.164 [2024-07-15 12:43:57.192278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.192293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.164 [2024-07-15 12:43:57.192306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.164 [2024-07-15 12:43:57.192320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b9100 is same with the state(5) to be set 00:19:45.164 [2024-07-15 12:43:57.193459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.164 [2024-07-15 12:43:57.193498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b9100 (9): Bad file descriptor 00:19:45.164 [2024-07-15 12:43:57.193860] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.164 [2024-07-15 12:43:57.193891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b9100 with addr=10.0.0.2, port=4421 00:19:45.164 [2024-07-15 12:43:57.193908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b9100 is same with the state(5) to be set 00:19:45.165 [2024-07-15 12:43:57.193964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b9100 (9): Bad file descriptor 00:19:45.165 [2024-07-15 12:43:57.194093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:45.165 [2024-07-15 12:43:57.194117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:45.165 [2024-07-15 12:43:57.194133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:45.165 [2024-07-15 12:43:57.194188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:45.165 [2024-07-15 12:43:57.194221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:45.165 [2024-07-15 12:44:07.257320] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:45.165 Received shutdown signal, test time was about 55.385497 seconds
00:19:45.165
00:19:45.165 Latency(us)
00:19:45.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:45.165 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:45.165 Verification LBA range: start 0x0 length 0x4000
00:19:45.165 Nvme0n1 : 55.38 7382.24 28.84 0.00 0.00 17305.93 845.27 7046430.72
00:19:45.165 ===================================================================================================================
00:19:45.165 Total : 7382.24 28.84 0.00 0.00 17305.93 845.27 7046430.72
00:19:45.165 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:45.428 rmmod nvme_tcp
00:19:45.428 rmmod nvme_fabrics
00:19:45.428 rmmod nvme_keyring
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81042 ']'
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81042
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81042 ']'
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81042
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:45.428 12:44:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81042
00:19:45.428 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:45.428 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:45.428 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81042'
00:19:45.428 killing process with pid 81042
00:19:45.428 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81042
00:19:45.428 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81042
00:19:45.428 12:44:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:19:45.689
00:19:45.689 real 1m1.389s
00:19:45.689 user 2m49.288s
00:19:45.689 sys 0m19.161s
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:45.689 12:44:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:19:45.689 ************************************
00:19:45.689 END TEST nvmf_host_multipath
00:19:45.689 ************************************
00:19:45.689 12:44:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:19:45.689 12:44:18 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:45.689 12:44:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:19:45.689 12:44:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:45.689 12:44:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:45.689 ************************************
00:19:45.689 START TEST nvmf_timeout
00:19:45.689 ************************************
00:19:45.689 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:45.947 * Looking for test storage...
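A quick back-of-the-envelope check of the bdevperf summary printed above for the multipath run: 7382.24 IOPS at the 4096-byte IO size shown in the job header works out to the reported 28.84 MiB/s. The snippet below is only an illustrative sketch, not part of the SPDK test scripts or of this log, and assumes a POSIX awk on the build host:

# Illustrative cross-check, not produced by the test run:
# MiB/s = IOPS * IO size in bytes / 2^20
awk -v iops=7382.24 -v io_size=4096 \
    'BEGIN { printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints: 28.84 MiB/s

The connect() failures with errno = 111 (ECONNREFUSED on Linux) logged before the summary appear to be the expected effect of the listener switch-over that the multipath test exercises; the later "Resetting controller successful" message shows the host re-established the connection before the shutdown signal arrived.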
00:19:45.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.947 
12:44:18 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.947 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.948 12:44:18 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:45.948 Cannot find device "nvmf_tgt_br" 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.948 Cannot find device "nvmf_tgt_br2" 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:45.948 Cannot find device "nvmf_tgt_br" 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:45.948 Cannot find device "nvmf_tgt_br2" 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.948 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.948 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:46.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:46.206 00:19:46.206 --- 10.0.0.2 ping statistics --- 00:19:46.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.206 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:46.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:46.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:19:46.206 00:19:46.206 --- 10.0.0.3 ping statistics --- 00:19:46.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.206 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:46.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:46.206 00:19:46.206 --- 10.0.0.1 ping statistics --- 00:19:46.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.206 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82198 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:46.206 12:44:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82198 00:19:46.207 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82198 ']' 00:19:46.207 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.207 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.207 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.207 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.207 12:44:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.207 [2024-07-15 12:44:18.876899] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
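The three pings above confirm the virtual topology that nvmf_veth_init builds before the target starts: the initiator keeps nvmf_init_if (10.0.0.1) on the host, both target interfaces live inside the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge ties the host-side veth ends together. Condensed from the commands logged above into a minimal sketch (interface names and addresses as used by this run; the real helper also flushes any stale devices first):

# create the target namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# address the endpoints and bring everything up
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side ends and let NVMe/TCP traffic through
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT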
00:19:46.207 [2024-07-15 12:44:18.877002] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.465 [2024-07-15 12:44:19.019126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:46.465 [2024-07-15 12:44:19.138671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.465 [2024-07-15 12:44:19.138722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.465 [2024-07-15 12:44:19.138752] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.465 [2024-07-15 12:44:19.138762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.465 [2024-07-15 12:44:19.138770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.465 [2024-07-15 12:44:19.138875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.465 [2024-07-15 12:44:19.139349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.722 [2024-07-15 12:44:19.193299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.288 12:44:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:47.596 [2024-07-15 12:44:20.133301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.596 12:44:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:47.862 Malloc0 00:19:47.862 12:44:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.119 12:44:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.377 12:44:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.635 [2024-07-15 12:44:21.268906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:48.635 
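With both reactors up, the target side is provisioned entirely over JSON-RPC. Reduced to the essential calls from the trace above (a sketch; rpc_py and the malloc sizing come from the variables set earlier in timeout.sh):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport, with the same -o -u 8192 options the test passes
$rpc_py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks backs the namespace
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen on the namespaced target interface reachable through the bridge
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started as a separate SPDK application (-m 0x4 -z -r /var/tmp/bdevperf.sock) so the initiator can be driven over its own RPC socket, as the next lines show.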
12:44:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82252 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82252 /var/tmp/bdevperf.sock 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82252 ']' 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.635 12:44:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:48.893 [2024-07-15 12:44:21.341295] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:48.893 [2024-07-15 12:44:21.341400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82252 ] 00:19:48.893 [2024-07-15 12:44:21.484941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.150 [2024-07-15 12:44:21.616910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.150 [2024-07-15 12:44:21.676562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:49.716 12:44:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.716 12:44:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:49.716 12:44:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:49.973 12:44:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:50.538 NVMe0n1 00:19:50.538 12:44:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82275 00:19:50.538 12:44:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:50.538 12:44:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:50.538 Running I/O for 10 seconds... 
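This is the host-side half of the fixture: bdev_nvme is configured through the bdevperf RPC socket and the controller is attached with the reconnect/ctrlr-loss knobs that the timeout test exercises. A condensed sketch of those calls (same values as in the trace; the trailing perform_tests call is what prints "Running I/O for 10 seconds..."):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

$rpc_py -s $bdevperf_rpc_sock bdev_nvme_set_options -r -1
# attach NVMe0 over TCP; retry the connection every 2 s and give the
# controller up for lost if it stays unreachable for 5 s
$rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# start the verify workload defined on the bdevperf command line (-q 128 -o 4096 -w verify -t 10)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bdevperf_rpc_sock perform_tests &

While that workload runs, the test's first step is to remove the 10.0.0.2:4420 listener (next in the trace); the flood of ABORTED - SQ DELETION completions that follows is the target tearing down its queue pairs while the initiator waits out the reconnect window.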
00:19:51.469 12:44:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:51.731 [2024-07-15 12:44:24.182045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4e50 is same with the state(5) to be set
00:19:51.732 [2024-07-15 12:44:24.182099 - 12:44:24.183065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4e50 is same with the state(5) to be set (same message repeated throughout this interval)
00:19:51.731 [2024-07-15 12:44:24.182144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:51.731 [2024-07-15 12:44:24.182177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.731 [2024-07-15 12:44:24.182191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:51.731 [2024-07-15 12:44:24.182202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.731 [2024-07-15 12:44:24.182215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:51.731 [2024-07-15 12:44:24.182226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.732 [2024-07-15 12:44:24.182238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:51.732 [2024-07-15 12:44:24.182248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.732 [2024-07-15 12:44:24.182259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1609d40 is same with the state(5) to be set
00:19:51.732 [2024-07-15 12:44:24.183120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:51.732 [2024-07-15 12:44:24.183140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.732 [2024-07-15 12:44:24.183161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:51.732 [2024-07-15 12:44:24.183173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.732 [2024-07-15 12:44:24.183186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:51.732 [2024-07-15 12:44:24.183196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.732 [2024-07-15 12:44:24.183209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:51.732 [2024-07-15 12:44:24.183219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.732 [2024-07-15 12:44:24.183232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:51.732 [2024-07-15 12:44:24.183242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 
[2024-07-15 12:44:24.183489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.732 [2024-07-15 12:44:24.183641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.732 [2024-07-15 12:44:24.183651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.183982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.183995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63112 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:51.733 [2024-07-15 12:44:24.184431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.184984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.184996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.185006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.185018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.185028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.185039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.185050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.185062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.185073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.733 [2024-07-15 12:44:24.185086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.733 [2024-07-15 12:44:24.185096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:51.734 [2024-07-15 12:44:24.185611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.734 [2024-07-15 12:44:24.185719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.734 [2024-07-15 12:44:24.185752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.734 [2024-07-15 12:44:24.185774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.734 [2024-07-15 12:44:24.185808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.734 [2024-07-15 12:44:24.185829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.734 [2024-07-15 12:44:24.185851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.734 [2024-07-15 12:44:24.185873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.734 [2024-07-15 12:44:24.185884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.734 [2024-07-15 12:44:24.185894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.185906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.185922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.185939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.185949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.185961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.185971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.185982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.185992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.186004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.186018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.186030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.186040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.186052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.186061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.186073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.735 [2024-07-15 12:44:24.186083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.186094] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.735 [2024-07-15 12:44:24.186104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.186115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16544d0 is same with the state(5) to be set 00:19:51.735 [2024-07-15 12:44:24.186127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.735 [2024-07-15 12:44:24.186135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.735 [2024-07-15 12:44:24.186144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:19:51.735 [2024-07-15 12:44:24.186646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.735 [2024-07-15 12:44:24.186717] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16544d0 was disconnected and freed. reset controller. 00:19:51.735 [2024-07-15 12:44:24.187242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.735 [2024-07-15 12:44:24.187274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1609d40 (9): Bad file descriptor 00:19:51.735 [2024-07-15 12:44:24.187715] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.735 [2024-07-15 12:44:24.187760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1609d40 with addr=10.0.0.2, port=4420 00:19:51.735 [2024-07-15 12:44:24.187774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1609d40 is same with the state(5) to be set 00:19:51.735 [2024-07-15 12:44:24.187797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1609d40 (9): Bad file descriptor 00:19:51.735 [2024-07-15 12:44:24.187815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.735 [2024-07-15 12:44:24.187826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:51.735 [2024-07-15 12:44:24.187838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.735 [2024-07-15 12:44:24.187860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
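The dump above is the initiator-side fallout of the listener going away: every in-flight READ/WRITE on qpair 1 is completed as ABORTED - SQ DELETION, the qpair (0x16544d0) is disconnected and freed, and bdev_nvme enters its reset/reconnect loop, each connect() attempt failing with errno 111 (connection refused on Linux) until the connection comes back or the controller is eventually given up on, as happens later in this trace. While that loop runs, the controller and its bdev stay registered with bdevperf, which is exactly what the get_controller/get_bdev probes traced below (host/timeout.sh@57/@58) assert. A minimal stand-alone sketch of that probe, assuming the same paths and names as this run (rpc.py under /home/vagrant/spdk_repo/spdk, bdevperf RPC socket at /var/tmp/bdevperf.sock, controller NVMe0):

  # Hypothetical re-creation of the get_controller/get_bdev checks; all paths and
  # names are assumptions copied from this run, not a general-purpose script.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ctrlr=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
  # While reconnect attempts are still being made both names are present; once the
  # controller is finally dropped they come back empty (the '' == '' checks later in this log).
  [[ "$ctrlr" == NVMe0 && "$bdev" == NVMe0n1 ]] && echo 'controller still registered, reconnect in progress'
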
00:19:51.735 [2024-07-15 12:44:24.187880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.735 12:44:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:53.645 [2024-07-15 12:44:26.188067] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.645 [2024-07-15 12:44:26.188143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1609d40 with addr=10.0.0.2, port=4420 00:19:53.645 [2024-07-15 12:44:26.188161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1609d40 is same with the state(5) to be set 00:19:53.645 [2024-07-15 12:44:26.188190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1609d40 (9): Bad file descriptor 00:19:53.645 [2024-07-15 12:44:26.188211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.645 [2024-07-15 12:44:26.188223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.645 [2024-07-15 12:44:26.188235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.645 [2024-07-15 12:44:26.188264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.645 [2024-07-15 12:44:26.188277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.645 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:53.645 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:53.645 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:53.930 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:53.930 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:53.930 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:53.930 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:54.196 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:54.196 12:44:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:55.617 [2024-07-15 12:44:28.188443] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.617 [2024-07-15 12:44:28.188525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1609d40 with addr=10.0.0.2, port=4420 00:19:55.617 [2024-07-15 12:44:28.188544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1609d40 is same with the state(5) to be set 00:19:55.617 [2024-07-15 12:44:28.188574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1609d40 (9): Bad file descriptor 00:19:55.617 [2024-07-15 12:44:28.188596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.617 [2024-07-15 12:44:28.188607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:55.617 [2024-07-15 12:44:28.188618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:19:55.617 [2024-07-15 12:44:28.188648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:55.617 [2024-07-15 12:44:28.188661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.532 [2024-07-15 12:44:30.188708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.532 [2024-07-15 12:44:30.188779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.532 [2024-07-15 12:44:30.188793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.532 [2024-07-15 12:44:30.188804] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:57.532 [2024-07-15 12:44:30.188833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.926 00:19:58.926 Latency(us) 00:19:58.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.926 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.926 Verification LBA range: start 0x0 length 0x4000 00:19:58.926 NVMe0n1 : 8.14 963.10 3.76 15.72 0.00 130675.07 4051.32 7046430.72 00:19:58.926 =================================================================================================================== 00:19:58.926 Total : 963.10 3.76 15.72 0.00 130675.07 4051.32 7046430.72 00:19:58.926 0 00:19:59.184 12:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:59.184 12:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.184 12:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:59.443 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:59.443 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:59.443 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:59.443 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82275 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82252 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82252 ']' 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82252 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82252 00:19:59.703 killing process with pid 82252 00:19:59.703 Received shutdown signal, test time was about 9.305743 seconds 00:19:59.703 00:19:59.703 Latency(us) 00:19:59.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.703 =================================================================================================================== 00:19:59.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82252' 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82252 00:19:59.703 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82252 00:19:59.961 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.219 [2024-07-15 12:44:32.789324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82398 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82398 /var/tmp/bdevperf.sock 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82398 ']' 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.219 12:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:00.219 [2024-07-15 12:44:32.864946] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
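With the first bdevperf (pid 82252) torn down, the harness sets up the second phase of the timeout test: it re-adds the TCP listener, launches a fresh bdevperf (pid 82398) with the -f flag, waits for its RPC socket, and, in the lines that follow, attaches the controller with short --ctrlr-loss-timeout-sec/--fast-io-fail-timeout-sec/--reconnect-delay-sec values before pulling the listener again mid-run to provoke the abort/reconnect sequence below. Condensed into one sketch, with every command, path, address, and timeout value taken from the surrounding trace (only the sleep is a simplification; the script itself waits with waitforlisten on /var/tmp/bdevperf.sock):

  # Sketch of this phase's setup; assumes the NVMe-oF target already exposes
  # subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2, as in this run.
  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  sleep 1   # simplification: the harness uses waitforlisten on the RPC socket instead
  "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  # Removing the listener while I/O is running is what triggers the qpair aborts below.
  "$spdk/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
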
00:20:00.219 [2024-07-15 12:44:32.865282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82398 ] 00:20:00.477 [2024-07-15 12:44:33.002885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.477 [2024-07-15 12:44:33.125137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.755 [2024-07-15 12:44:33.180011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:01.325 12:44:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.325 12:44:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:01.325 12:44:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:01.583 12:44:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:01.841 NVMe0n1 00:20:02.112 12:44:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82416 00:20:02.112 12:44:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.112 12:44:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:02.112 Running I/O for 10 seconds... 00:20:03.053 12:44:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.312 [2024-07-15 12:44:35.805147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a2a0 is same with the state(5) to be set 00:20:03.312 [2024-07-15 12:44:35.805470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a2a0 is same with the state(5) to be set 00:20:03.312 [2024-07-15 12:44:35.805721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a2a0 is same with the state(5) to be set 00:20:03.312 [2024-07-15 12:44:35.806085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.312 [2024-07-15 12:44:35.806117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.312 [2024-07-15 12:44:35.806160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.312 [2024-07-15 12:44:35.806183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.312 [2024-07-15 12:44:35.806205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.312 [2024-07-15 12:44:35.806228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.312 [2024-07-15 12:44:35.806250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.312 [2024-07-15 12:44:35.806272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.312 [2024-07-15 12:44:35.806294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.312 [2024-07-15 12:44:35.806316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.312 [2024-07-15 12:44:35.806338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.312 [2024-07-15 12:44:35.806350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.312 [2024-07-15 12:44:35.806360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69976 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.313 [2024-07-15 12:44:35.806901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.806944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.806987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.806999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.807009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.807030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.807051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.807073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.807094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.313 [2024-07-15 12:44:35.807115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.807575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.807598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.807620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.807642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.807664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.807686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.313 [2024-07-15 12:44:35.807708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.313 [2024-07-15 12:44:35.807720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.807979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.808834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.808855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.808877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.808917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.808939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.808961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.808983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.808995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.809005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.314 [2024-07-15 12:44:35.809027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 
12:44:35.809038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.314 [2024-07-15 12:44:35.809536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.314 [2024-07-15 12:44:35.809548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.315 [2024-07-15 12:44:35.809558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.315 [2024-07-15 12:44:35.809580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.315 [2024-07-15 12:44:35.809602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.315 [2024-07-15 12:44:35.809623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.315 [2024-07-15 12:44:35.809645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.315 [2024-07-15 12:44:35.809667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.315 [2024-07-15 12:44:35.809690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.315 [2024-07-15 12:44:35.809713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.315 [2024-07-15 12:44:35.809748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.315 [2024-07-15 12:44:35.809771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.315 [2024-07-15 12:44:35.809794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.315 [2024-07-15 12:44:35.809816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.315 [2024-07-15 12:44:35.809838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.315 [2024-07-15 12:44:35.809860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10074d0 is same with the state(5) to be set 00:20:03.315 [2024-07-15 12:44:35.809888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.809897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.809905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70264 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.809915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.809934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.809943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70784 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.809952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.809970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.809978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70792 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.809988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.809998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70800 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70808 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70816 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70824 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70832 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70840 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70848 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 
12:44:35.810586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70856 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70864 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70872 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70880 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70888 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70896 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.315 [2024-07-15 12:44:35.810818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.315 [2024-07-15 12:44:35.810825] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.315 [2024-07-15 12:44:35.810833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70904 len:8 PRP1 0x0 PRP2 0x0 00:20:03.315 [2024-07-15 12:44:35.810843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.810853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.316 [2024-07-15 12:44:35.810861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.316 [2024-07-15 12:44:35.810869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70912 len:8 PRP1 0x0 PRP2 0x0 00:20:03.316 [2024-07-15 12:44:35.810878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.810888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.316 [2024-07-15 12:44:35.810896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.316 [2024-07-15 12:44:35.810904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70920 len:8 PRP1 0x0 PRP2 0x0 00:20:03.316 [2024-07-15 12:44:35.810913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.810923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.316 [2024-07-15 12:44:35.810931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.316 [2024-07-15 12:44:35.810939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70928 len:8 PRP1 0x0 PRP2 0x0 00:20:03.316 [2024-07-15 12:44:35.810949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.810958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.316 [2024-07-15 12:44:35.810967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.316 [2024-07-15 12:44:35.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70936 len:8 PRP1 0x0 PRP2 0x0 00:20:03.316 [2024-07-15 12:44:35.810984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.811048] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10074d0 was disconnected and freed. reset controller. 
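The dump above is the expected flood of per-command notices while qid:1 is torn down: every READ/WRITE still queued is printed by nvme_io_qpair_print_command and then completed with ABORTED - SQ DELETION. Below is a minimal, purely illustrative Python sketch (an assumed helper, not part of the SPDK tree or of host/timeout.sh) that condenses such a dump into per-opcode counts; it relies only on the log format shown above.

#!/usr/bin/env python3
"""Illustrative helper (assumed, not from the repo): condense the repeated
"nvme_io_qpair_print_command ... ABORTED - SQ DELETION" pairs above into a
per-opcode summary so a long abort dump can be read at a glance."""
import re
import sys
from collections import Counter

# Matches the command lines printed by nvme_qpair.c:nvme_io_qpair_print_command,
# e.g. "*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70400 len:8 ..."
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(lines):
    counts = Counter()
    lbas = []
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            opcode, sqid, lba, _length = m.groups()
            counts[(opcode, sqid)] += 1
            lbas.append(int(lba))
    return counts, lbas

if __name__ == "__main__":
    # Usage (hypothetical): python3 summarize_aborts.py < build.log
    counts, lbas = summarize(sys.stdin)
    for (opcode, sqid), n in sorted(counts.items()):
        print(f"{opcode:<5} sqid:{sqid} aborted commands: {n}")
    if lbas:
        print(f"LBA range touched: {min(lbas)}..{max(lbas)}")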
00:20:03.316 [2024-07-15 12:44:35.811157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.316 [2024-07-15 12:44:35.811176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.811188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.316 [2024-07-15 12:44:35.811197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.811208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.316 [2024-07-15 12:44:35.811218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.811229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.316 [2024-07-15 12:44:35.811238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.316 [2024-07-15 12:44:35.811248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set 00:20:03.316 [2024-07-15 12:44:35.811482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.316 [2024-07-15 12:44:35.811508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor 00:20:03.316 [2024-07-15 12:44:35.811609] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.316 [2024-07-15 12:44:35.811632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbcd40 with addr=10.0.0.2, port=4420 00:20:03.316 [2024-07-15 12:44:35.811644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set 00:20:03.316 [2024-07-15 12:44:35.811663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor 00:20:03.316 [2024-07-15 12:44:35.811680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:03.316 [2024-07-15 12:44:35.811691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:03.316 [2024-07-15 12:44:35.811702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:03.316 [2024-07-15 12:44:35.811723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
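The sequence above is one reconnect attempt made while the connection cannot be re-established: connect() is refused (errno 111, ECONNREFUSED, consistent with the test having removed the TCP listener, which is re-added via rpc.py just below), the flush then fails with a bad file descriptor, and bdev_nvme reports the reset attempt as failed before scheduling the next retry. The following small sketch (again an assumed helper, not from the repo) pulls this reset/reconnect timeline out of a saved log; it assumes one log record per line.

#!/usr/bin/env python3
"""Illustrative sketch (assumed helper): extract the controller reset/reconnect
timeline from a log like the one above, so the retry loop
(reset -> connect refused -> reset failed -> retry -> reset successful)
is easy to follow. Assumes one log record per input line."""
import re
import sys

# Marker strings copied verbatim from the log records above.
EVENTS = (
    ("resetting controller", "reset started"),
    ("connect() failed, errno = 111", "connect refused (listener down)"),
    ("Resetting controller failed.", "reset attempt failed"),
    ("Resetting controller successful.", "reset recovered"),
)
STAMP_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")

for line in sys.stdin:
    for needle, label in EVENTS:
        if needle in line:
            m = STAMP_RE.search(line)
            stamp = m.group(1) if m else "?"
            print(f"{stamp}  {label}")
            break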
00:20:03.316 [2024-07-15 12:44:35.811753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:03.316 12:44:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:20:04.265 [2024-07-15 12:44:36.811914] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:04.265 [2024-07-15 12:44:36.812451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbcd40 with addr=10.0.0.2, port=4420
00:20:04.265 [2024-07-15 12:44:36.812979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set
00:20:04.265 [2024-07-15 12:44:36.813388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor
00:20:04.265 [2024-07-15 12:44:36.813811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:04.265 [2024-07-15 12:44:36.814209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:04.265 [2024-07-15 12:44:36.814612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:04.265 [2024-07-15 12:44:36.814866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:04.265 [2024-07-15 12:44:36.815091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:04.265 12:44:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:04.523 [2024-07-15 12:44:37.081690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:04.523 12:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82416
00:20:05.456 [2024-07-15 12:44:37.832496] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:12.036
00:20:12.036                                                                                         Latency(us)
00:20:12.036  Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:12.036  Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:12.036    Verification LBA range: start 0x0 length 0x4000
00:20:12.036    NVMe0n1                              :      10.01    6346.40      24.79       0.00     0.00   20123.53    1690.53 3019898.88
00:20:12.036  ===================================================================================================================
00:20:12.036  Total                                  :               6346.40      24.79       0.00     0.00   20123.53    1690.53 3019898.88
00:20:12.036 0
00:20:12.036 12:44:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82521
00:20:12.036 12:44:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:12.036 12:44:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:12.294 Running I/O for 10 seconds...
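The bdevperf summary above covers the first 10 s verify run (6346.40 IOPS, about 24.79 MiB/s, average latency roughly 20.1 ms) before the next perform_tests run is kicked off. A short illustrative sketch, assuming only the column order printed in the header above, that turns the per-device summary row into named fields:

#!/usr/bin/env python3
"""Illustrative sketch (assumed helper): parse a bdevperf summary row such as
"NVMe0n1 : 10.01 6346.40 24.79 0.00 0.00 20123.53 1690.53 3019898.88"
into named fields, following the header printed above
(runtime(s), IOPS, MiB/s, Fail/s, TO/s, Average/min/max latency in us)."""

FIELDS = ("runtime_s", "iops", "mib_s", "fail_s", "to_s",
          "avg_lat_us", "min_lat_us", "max_lat_us")

def parse_summary_row(line: str) -> dict:
    # Split "device : numbers..." on the first colon, then convert the numbers.
    name, _, values = line.partition(":")
    numbers = [float(tok) for tok in values.split()]
    if len(numbers) != len(FIELDS):
        raise ValueError(f"unexpected summary row: {line!r}")
    return {"device": name.strip(), **dict(zip(FIELDS, numbers))}

if __name__ == "__main__":
    # The example row is copied from the summary above.
    row = "NVMe0n1 : 10.01 6346.40 24.79 0.00 0.00 20123.53 1690.53 3019898.88"
    print(parse_summary_row(row))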
00:20:13.226 12:44:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.502 [2024-07-15 12:44:45.959898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.502 [2024-07-15 12:44:45.959963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.959980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.502 [2024-07-15 12:44:45.959991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.502 [2024-07-15 12:44:45.960012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.502 [2024-07-15 12:44:45.960033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set 00:20:13.502 [2024-07-15 12:44:45.960320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 12:44:45.960341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960454] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960704] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 12:44:45.960885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.502 [2024-07-15 12:44:45.960896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.960907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.960924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.960936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.960946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.960957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63968 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.960967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.960980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.960990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 
[2024-07-15 12:44:45.961187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.961975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.961987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 12:44:45.962394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.503 [2024-07-15 12:44:45.962405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.962980] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.962990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64536 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.504 [2024-07-15 12:44:45.963250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.504 [2024-07-15 12:44:45.963260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 
[2024-07-15 12:44:45.963437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.963525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.963711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.963721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.964614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.964991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.965525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.965893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.966432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.966865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.967296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.967757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.968210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.968423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.968518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.505 [2024-07-15 12:44:45.968544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.968558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.505 [2024-07-15 12:44:45.968570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.968582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a620 is same with the state(5) to be set 00:20:13.505 [2024-07-15 12:44:45.968598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.505 [2024-07-15 12:44:45.968607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.505 [2024-07-15 12:44:45.968616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:20:13.505 [2024-07-15 12:44:45.968626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.505 [2024-07-15 12:44:45.968688] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x100a620 was disconnected and freed. reset controller. 00:20:13.505 [2024-07-15 12:44:45.968956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.505 [2024-07-15 12:44:45.968986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor 00:20:13.505 [2024-07-15 12:44:45.969097] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.505 [2024-07-15 12:44:45.969122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbcd40 with addr=10.0.0.2, port=4420 00:20:13.505 [2024-07-15 12:44:45.969134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set 00:20:13.505 [2024-07-15 12:44:45.969153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor 00:20:13.505 [2024-07-15 12:44:45.969170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.505 [2024-07-15 12:44:45.969180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.505 [2024-07-15 12:44:45.969191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:13.505 [2024-07-15 12:44:45.969211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:13.505 [2024-07-15 12:44:45.969224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.505 12:44:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:14.438 [2024-07-15 12:44:46.969397] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.438 [2024-07-15 12:44:46.969491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbcd40 with addr=10.0.0.2, port=4420 00:20:14.438 [2024-07-15 12:44:46.969511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set 00:20:14.438 [2024-07-15 12:44:46.969541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor 00:20:14.438 [2024-07-15 12:44:46.969562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:14.438 [2024-07-15 12:44:46.969574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:14.438 [2024-07-15 12:44:46.969586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:14.438 [2024-07-15 12:44:46.969616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.438 [2024-07-15 12:44:46.969630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.382 [2024-07-15 12:44:47.969810] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:15.382 [2024-07-15 12:44:47.969889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbcd40 with addr=10.0.0.2, port=4420 00:20:15.382 [2024-07-15 12:44:47.969909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set 00:20:15.382 [2024-07-15 12:44:47.969938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor 00:20:15.382 [2024-07-15 12:44:47.969974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:15.382 [2024-07-15 12:44:47.969987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:15.382 [2024-07-15 12:44:47.969999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:15.382 [2024-07-15 12:44:47.970029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:15.382 [2024-07-15 12:44:47.970043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:16.314 [2024-07-15 12:44:48.973698] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:16.314 [2024-07-15 12:44:48.973787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbcd40 with addr=10.0.0.2, port=4420 00:20:16.314 [2024-07-15 12:44:48.973806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcd40 is same with the state(5) to be set 00:20:16.314 [2024-07-15 12:44:48.974061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd40 (9): Bad file descriptor 00:20:16.314 [2024-07-15 12:44:48.974307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.314 [2024-07-15 12:44:48.974322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:16.314 [2024-07-15 12:44:48.974334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:16.314 [2024-07-15 12:44:48.978429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:16.314 [2024-07-15 12:44:48.978466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:16.314 12:44:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.571 [2024-07-15 12:44:49.190060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.571 12:44:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82521 00:20:17.507 [2024-07-15 12:44:50.010712] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
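The block above is the recovery half of the first timeout case: with nothing listening on 10.0.0.2:4420, every reconnect attempt fails with errno 111 (connection refused) and the queued controller reset fails again about once a second, until timeout.sh re-adds the TCP listener at step @102 and the reset finally completes ("Resetting controller successful"). A minimal sketch of that re-add step, restating only the RPC call visible in the trace (the rpc.py path, subsystem NQN, address and port are the ones logged here; it assumes the same nvmf target is still running):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used by timeout.sh in this run
  # Re-create the TCP listener so the initiator's pending controller reset can reconnect.
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # timeout.sh then waits for its backgrounded bdevperf job (pid 82521 here, step @103)
  # before checking that the reset was reported as successful.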
00:20:22.772 00:20:22.772 Latency(us) 00:20:22.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.772 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:22.772 Verification LBA range: start 0x0 length 0x4000 00:20:22.772 NVMe0n1 : 10.01 5379.42 21.01 3656.75 0.00 14128.46 677.70 3019898.88 00:20:22.772 =================================================================================================================== 00:20:22.773 Total : 5379.42 21.01 3656.75 0.00 14128.46 0.00 3019898.88 00:20:22.773 0 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82398 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82398 ']' 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82398 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82398 00:20:22.773 killing process with pid 82398 00:20:22.773 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.773 00:20:22.773 Latency(us) 00:20:22.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.773 =================================================================================================================== 00:20:22.773 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82398' 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82398 00:20:22.773 12:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82398 00:20:22.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82641 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82641 /var/tmp/bdevperf.sock 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82641 ']' 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.773 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:22.773 [2024-07-15 12:44:55.162741] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
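Here the first bdevperf instance (pid 82398) has been killed and a second one is launched for the next timeout case. The logged command starts bdevperf with -z, so the application comes up idle and the randread workload defined by -q/-o/-w/-t is only started later over the RPC socket named by -r; waitforlisten (an autotest_common.sh helper, as the "Waiting for process to start up and listen on UNIX domain socket" message shows) blocks until that socket is ready. A short sketch of the same launch-and-wait pattern, reusing only the paths and flags from the log (the polling loop is a hand-rolled stand-in for the helper, not the helper itself):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  SOCK=/var/tmp/bdevperf.sock
  # Core mask 0x4 (core 2), RPC wait mode (-z), queue depth 128, 4 KiB random reads for 10 s.
  "$BDEVPERF" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # Poll until the RPC UNIX socket exists before sending configuration RPCs to it.
  while [ ! -S "$SOCK" ]; do sleep 0.1; done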
00:20:22.773 [2024-07-15 12:44:55.162827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82641 ] 00:20:22.773 [2024-07-15 12:44:55.298285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.773 [2024-07-15 12:44:55.415926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.031 [2024-07-15 12:44:55.471019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:23.031 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.031 12:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:23.031 12:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82644 00:20:23.031 12:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82641 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:23.031 12:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:23.291 12:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:23.550 NVMe0n1 00:20:23.550 12:44:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82686 00:20:23.550 12:44:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.550 12:44:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:23.809 Running I/O for 10 seconds... 
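With bdevperf up, the trace configures the test bdev and starts the run: bdev_nvme_set_options is applied with -r -1 -e 9 (passed through exactly as logged), bpftrace.sh attaches scripts/bpf/nvmf_timeout.bt to the bdevperf pid, bdev_nvme_attach_controller connects to the target with a 5 second controller-loss timeout and a 2 second reconnect delay, and bdevperf.py perform_tests releases the queued randread workload. A condensed sketch of that sequence, copied from the RPCs in the trace (paths, NQN, address and option values are the logged ones; the bpftrace step is omitted since it needs root and the live pid):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # Apply the bdev_nvme options used by this run (see host/timeout.sh@118).
  "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
  # Attach the remote subsystem as bdev NVMe0n1; give up on the controller after 5 s
  # without a connection, retrying the connection every 2 s until then.
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Release the queued workload; timeout.sh backgrounds this and records rpc_pid.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &

Immediately after this, the listener is removed again (next line, host/timeout.sh@126), which is what produces the burst of recv-state errors and aborted READ commands that follows.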
00:20:24.751 12:44:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.751 [2024-07-15 12:44:57.404373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.404995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.405003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.405012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.405021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.405029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.405037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.751 [2024-07-15 12:44:57.405045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x647b80 is same with the state(5) to be set [this identical tcp.c:1607:nvmf_tcp_qpair_set_recv_state error repeats back to back from 12:44:57.405054 through 12:44:57.405812; the duplicate entries are not reproduced here] 00:20:24.752 [2024-07-15 12:44:57.405832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405884] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.405973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647b80 is same with the state(5) to be set 00:20:24.752 [2024-07-15 12:44:57.406050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.406096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.406125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.406726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.406849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.406865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.406880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.406892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.406908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.406921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.406935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.406948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.406962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.406975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.406990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 
[2024-07-15 12:44:57.407141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407421] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.752 [2024-07-15 12:44:57.407492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.752 [2024-07-15 12:44:57.407505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.407979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.407992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.408977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.408992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.409019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.409046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 
[2024-07-15 12:44:57.409073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.409100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.409127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.409155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.409182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.753 [2024-07-15 12:44:57.409194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.753 [2024-07-15 12:44:57.409209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.409977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.409989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.754 [2024-07-15 12:44:57.410406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.754 [2024-07-15 12:44:57.410421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 
12:44:57.410487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.410705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.410718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.411361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.411842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.412314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.412823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.413279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.413763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.414234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.414694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.415264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-07-15 12:44:57.415756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.415785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab8310 is same with the state(5) to be set 00:20:24.755 [2024-07-15 12:44:57.415804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:24.755 [2024-07-15 12:44:57.415816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:24.755 [2024-07-15 12:44:57.415829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73480 len:8 PRP1 0x0 PRP2 0x0 00:20:24.755 [2024-07-15 12:44:57.415841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.415923] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xab8310 was disconnected and freed. reset controller. 
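Everything from the first tcp.c:1607 error down to this point is one abort storm: once the controller reset starts, every READ still queued on qpair 0xab8310 is completed manually with status 00/08 (ABORTED - SQ DELETION) before the qpair is disconnected and freed. When triaging a run like this it can help to tally the printed commands against the aborted completions; below is a minimal sketch, assuming the console output has been saved to a file (build.log is a hypothetical name) and that only grep, wc, and bash are needed.

    #!/usr/bin/env bash
    # Tally the abort storm in a saved copy of this console log.
    # "build.log" is a placeholder path, not something the test produces.
    log=${1:-build.log}

    # Count occurrences rather than lines, since the saved log may be wrapped.
    reads=$(grep -o 'READ sqid:[0-9]* cid:[0-9]*' "$log" | wc -l)
    aborts=$(grep -o 'ABORTED - SQ DELETION' "$log" | wc -l)

    printf 'READ commands printed : %s\n' "$reads"
    printf 'aborted completions   : %s\n' "$aborts"

The two counts need not match exactly, because admin commands (the ASYNC EVENT REQUESTs that follow) are aborted with the same SQ DELETION status.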
00:20:24.755 [2024-07-15 12:44:57.416053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.755 [2024-07-15 12:44:57.416074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.416090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.755 [2024-07-15 12:44:57.416102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.416115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.755 [2024-07-15 12:44:57.416127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.416140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.755 [2024-07-15 12:44:57.416152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.755 [2024-07-15 12:44:57.416163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49c00 is same with the state(5) to be set 00:20:24.755 [2024-07-15 12:44:57.416479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:24.755 [2024-07-15 12:44:57.416520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49c00 (9): Bad file descriptor 00:20:24.755 [2024-07-15 12:44:57.416711] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.755 [2024-07-15 12:44:57.416779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49c00 with addr=10.0.0.2, port=4420 00:20:24.755 [2024-07-15 12:44:57.416804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49c00 is same with the state(5) to be set 00:20:24.755 [2024-07-15 12:44:57.416840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49c00 (9): Bad file descriptor 00:20:24.755 [2024-07-15 12:44:57.416871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:24.755 [2024-07-15 12:44:57.416887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:24.755 [2024-07-15 12:44:57.416899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:24.755 [2024-07-15 12:44:57.416925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
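The reset attempt above fails immediately in uring_sock_create with connect() errno = 111, which on Linux is ECONNREFUSED: at that moment nothing is accepting TCP connections on 10.0.0.2:4420, so the reconnect can only be retried. Outside of this deliberately induced failure, the same pair of messages usually just means the target's listener is down. A hedged manual check from the initiator side, assuming netcat and nvme-cli are installed (the address and port are taken from the log):

    # Probe the NVMe-oF/TCP listener the bdev is trying to reach.
    ADDR=10.0.0.2   # from the log above
    PORT=4420       # from the log above

    # A refused TCP connect here corresponds to the errno = 111 in the log.
    nc -z -w 2 "$ADDR" "$PORT" && echo "listener reachable" || echo "no listener on $ADDR:$PORT"

    # If the port is open, confirm the subsystem is actually exported.
    nvme discover -t tcp -a "$ADDR" -s "$PORT"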
00:20:24.755 [2024-07-15 12:44:57.416940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:24.755 12:44:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82686 00:20:27.290 [2024-07-15 12:44:59.417185] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.290 [2024-07-15 12:44:59.417268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49c00 with addr=10.0.0.2, port=4420 00:20:27.290 [2024-07-15 12:44:59.417288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49c00 is same with the state(5) to be set 00:20:27.290 [2024-07-15 12:44:59.417319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49c00 (9): Bad file descriptor 00:20:27.290 [2024-07-15 12:44:59.417340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:27.290 [2024-07-15 12:44:59.417351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:27.290 [2024-07-15 12:44:59.417363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:27.290 [2024-07-15 12:44:59.417392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:27.290 [2024-07-15 12:44:59.417404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:29.192 [2024-07-15 12:45:01.417651] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.192 [2024-07-15 12:45:01.417726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49c00 with addr=10.0.0.2, port=4420 00:20:29.192 [2024-07-15 12:45:01.417762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49c00 is same with the state(5) to be set 00:20:29.192 [2024-07-15 12:45:01.417792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49c00 (9): Bad file descriptor 00:20:29.192 [2024-07-15 12:45:01.417829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:29.192 [2024-07-15 12:45:01.417842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:29.192 [2024-07-15 12:45:01.417854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:29.192 [2024-07-15 12:45:01.417883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:29.192 [2024-07-15 12:45:01.417895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:31.091 [2024-07-15 12:45:03.417987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
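After the initial reset at 12:44:57, the retries above land at 12:44:59 and 12:45:01 and the controller is finally failed at 12:45:03, i.e. the bdev layer waits roughly two seconds between reconnect attempts before giving up. That spacing can be pulled straight out of a saved copy of the log; a small sketch, again assuming the output was written to a hypothetical build.log:

    #!/usr/bin/env bash
    # Print the gap between successive reconnect attempts, using the
    # nvme_ctrlr_disconnect "resetting controller" notices as markers.
    log=${1:-build.log}

    grep -o '\[[0-9-]* [0-9:.]*\] nvme_ctrlr.c:[0-9]*:nvme_ctrlr_disconnect' "$log" |
      awk -F'[][ ]' '{
          split($3, t, ":")                      # $3 is the HH:MM:SS.frac part
          now = t[1] * 3600 + t[2] * 60 + t[3]   # seconds since midnight
          if (NR > 1) printf "%.3f s after the previous attempt\n", now - prev
          prev = now
      }'

For this run the last two gaps come out at roughly two seconds, matching the reconnect delays recorded in trace.txt further down.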
00:20:31.091 [2024-07-15 12:45:03.418080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:31.091 [2024-07-15 12:45:03.418102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:31.091 [2024-07-15 12:45:03.418120] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:20:31.091 [2024-07-15 12:45:03.418164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:32.026
00:20:32.026 Latency(us)
00:20:32.026 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:20:32.026 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:20:32.026 NVMe0n1                     :       8.18    2127.81       8.31     15.65     0.00   59777.29    8281.37 7046430.72
00:20:32.026 ===================================================================================================================
00:20:32.026 Total                       :               2127.81       8.31     15.65     0.00   59777.29    8281.37 7046430.72
00:20:32.026 0
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:20:32.026 Attaching 5 probes...
00:20:32.026 1314.819857: reset bdev controller NVMe0
00:20:32.026 1314.952334: reconnect bdev controller NVMe0
00:20:32.026 3315.342213: reconnect delay bdev controller NVMe0
00:20:32.026 3315.374999: reconnect bdev controller NVMe0
00:20:32.026 5315.809477: reconnect delay bdev controller NVMe0
00:20:32.026 5315.840167: reconnect bdev controller NVMe0
00:20:32.026 7316.299609: reconnect delay bdev controller NVMe0
00:20:32.026 7316.331062: reconnect bdev controller NVMe0
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82644
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82641
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82641 ']'
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82641
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82641
00:20:32.026 killing process with pid 82641 Received shutdown signal, test time was about 8.234618 seconds
00:20:32.026 00
00:20:32.026 Latency(us)
00:20:32.026 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:20:32.026 ===================================================================================================================
00:20:32.026 Total                       :                  0.00       0.00      0.00      0.00       0.00       0.00       0.00
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82641'
00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 82641 00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82641 00:20:32.026 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.284 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:32.284 12:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:32.284 12:45:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.284 12:45:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:32.542 12:45:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.542 12:45:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:32.542 12:45:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.542 12:45:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.542 rmmod nvme_tcp 00:20:32.542 rmmod nvme_fabrics 00:20:32.542 rmmod nvme_keyring 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82198 ']' 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82198 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82198 ']' 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82198 00:20:32.542 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:32.543 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.543 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82198 00:20:32.543 killing process with pid 82198 00:20:32.543 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.543 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.543 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82198' 00:20:32.543 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82198 00:20:32.543 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82198 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:32.802 ************************************ 00:20:32.802 END TEST 
nvmf_timeout 00:20:32.802 ************************************ 00:20:32.802 00:20:32.802 real 0m47.011s 00:20:32.802 user 2m17.639s 00:20:32.802 sys 0m6.119s 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.802 12:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:32.802 12:45:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:32.802 12:45:05 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:32.802 12:45:05 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:32.802 12:45:05 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:32.802 12:45:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:32.802 12:45:05 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:32.802 00:20:32.802 real 12m16.304s 00:20:32.802 user 29m42.464s 00:20:32.802 sys 3m11.814s 00:20:32.802 ************************************ 00:20:32.802 END TEST nvmf_tcp 00:20:32.802 ************************************ 00:20:32.802 12:45:05 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.802 12:45:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:32.802 12:45:05 -- common/autotest_common.sh@1142 -- # return 0 00:20:32.802 12:45:05 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:32.802 12:45:05 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:32.802 12:45:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:32.802 12:45:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.802 12:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:32.802 ************************************ 00:20:32.802 START TEST nvmf_dif 00:20:32.802 ************************************ 00:20:32.802 12:45:05 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:33.061 * Looking for test storage... 
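The nvmftestfini pass above is the same cleanup every suite in this log ends with, and it is exactly what the nvmf_dif setup below rebuilds: stop the target, unload the initiator modules, and remove the veth namespace. A minimal by-hand equivalent, using only commands visible in this log (the ip netns delete step is an assumption, since _remove_spdk_ns itself is not expanded here, and $nvmfpid stands in for the pid printed above):

  kill "$nvmfpid" && wait "$nvmfpid"   # 82198 was this run's target pid
  modprobe -v -r nvme-tcp              # initiator modules loaded for the TCP suites
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk     # assumed body of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if        # drop the initiator-side address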
00:20:33.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:33.061 12:45:05 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.061 12:45:05 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.061 12:45:05 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.061 12:45:05 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.061 12:45:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.061 12:45:05 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.061 12:45:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.061 12:45:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:33.061 12:45:05 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.061 12:45:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:33.061 12:45:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:33.061 12:45:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:33.061 12:45:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:33.061 12:45:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.061 12:45:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:33.061 12:45:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:33.061 12:45:05 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:33.061 12:45:05 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:33.062 Cannot find device "nvmf_tgt_br" 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.062 Cannot find device "nvmf_tgt_br2" 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:33.062 Cannot find device "nvmf_tgt_br" 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:33.062 Cannot find device "nvmf_tgt_br2" 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:33.062 12:45:05 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:33.320 
12:45:05 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:33.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:20:33.320 00:20:33.320 --- 10.0.0.2 ping statistics --- 00:20:33.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.320 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:33.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:33.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:20:33.320 00:20:33.320 --- 10.0.0.3 ping statistics --- 00:20:33.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.320 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:33.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:33.320 00:20:33.320 --- 10.0.0.1 ping statistics --- 00:20:33.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.320 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:33.320 12:45:05 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:33.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:33.577 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:33.577 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:33.577 12:45:06 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.577 12:45:06 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.577 12:45:06 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.577 12:45:06 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.577 12:45:06 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.577 12:45:06 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.834 12:45:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:33.834 12:45:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:33.834 12:45:06 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.834 12:45:06 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.834 12:45:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:33.834 12:45:06 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83121 00:20:33.834 
12:45:06 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:33.834 12:45:06 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83121 00:20:33.834 12:45:06 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83121 ']' 00:20:33.834 12:45:06 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.834 12:45:06 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.834 12:45:06 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.835 12:45:06 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.835 12:45:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:33.835 [2024-07-15 12:45:06.351911] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:33.835 [2024-07-15 12:45:06.352009] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.835 [2024-07-15 12:45:06.488784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.092 [2024-07-15 12:45:06.608028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.092 [2024-07-15 12:45:06.608087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.092 [2024-07-15 12:45:06.608100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.092 [2024-07-15 12:45:06.608109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.092 [2024-07-15 12:45:06.608117] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
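The nvmf_veth_init and nvmfappstart steps above reduce to a small, reproducible topology: a veth pair for the initiator, another for the target namespace (the second target pair carrying 10.0.0.3 is omitted here for brevity), everything joined to one bridge, the NVMe/TCP port opened in iptables, and nvmf_tgt started inside the namespace. A condensed sketch of those same commands; the readiness loop at the end is an assumption, since waitforlisten is not expanded in this log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done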
00:20:34.092 [2024-07-15 12:45:06.608150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.092 [2024-07-15 12:45:06.662660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:34.657 12:45:07 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:34.657 12:45:07 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.657 12:45:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:34.657 12:45:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:34.657 [2024-07-15 12:45:07.295270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.657 12:45:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.657 12:45:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:34.657 ************************************ 00:20:34.657 START TEST fio_dif_1_default 00:20:34.657 ************************************ 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:34.657 bdev_null0 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:34.657 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.657 12:45:07 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:34.915 [2024-07-15 12:45:07.347386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.915 { 00:20:34.915 "params": { 00:20:34.915 "name": "Nvme$subsystem", 00:20:34.915 "trtype": "$TEST_TRANSPORT", 00:20:34.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.915 "adrfam": "ipv4", 00:20:34.915 "trsvcid": "$NVMF_PORT", 00:20:34.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.915 "hdgst": ${hdgst:-false}, 00:20:34.915 "ddgst": ${ddgst:-false} 00:20:34.915 }, 00:20:34.915 "method": "bdev_nvme_attach_controller" 00:20:34.915 } 00:20:34.915 EOF 00:20:34.915 )") 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:34.915 "params": { 00:20:34.915 "name": "Nvme0", 00:20:34.915 "trtype": "tcp", 00:20:34.915 "traddr": "10.0.0.2", 00:20:34.915 "adrfam": "ipv4", 00:20:34.915 "trsvcid": "4420", 00:20:34.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.915 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.915 "hdgst": false, 00:20:34.915 "ddgst": false 00:20:34.915 }, 00:20:34.915 "method": "bdev_nvme_attach_controller" 00:20:34.915 }' 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.915 12:45:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.915 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:34.915 fio-3.35 00:20:34.915 Starting 1 thread 00:20:47.119 00:20:47.119 filename0: (groupid=0, jobs=1): err= 0: pid=83183: Mon Jul 15 12:45:18 2024 00:20:47.119 read: IOPS=8457, BW=33.0MiB/s (34.6MB/s)(330MiB/10001msec) 00:20:47.119 slat (usec): min=6, max=409, avg= 8.61, stdev= 3.89 00:20:47.119 clat (usec): min=405, max=5760, avg=447.58, stdev=52.04 00:20:47.119 lat (usec): min=413, max=5787, avg=456.19, stdev=52.47 00:20:47.119 clat percentiles (usec): 00:20:47.119 | 1.00th=[ 412], 5.00th=[ 416], 10.00th=[ 420], 20.00th=[ 429], 00:20:47.119 | 30.00th=[ 433], 40.00th=[ 437], 50.00th=[ 441], 60.00th=[ 445], 00:20:47.119 | 70.00th=[ 453], 80.00th=[ 457], 90.00th=[ 469], 95.00th=[ 506], 00:20:47.119 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 701], 99.95th=[ 832], 00:20:47.119 | 99.99th=[ 1037] 00:20:47.119 bw ( KiB/s): min=31744, max=34688, per=100.00%, avg=33913.26, stdev=931.10, samples=19 00:20:47.119 iops : min= 7936, max= 8672, avg=8478.32, stdev=232.77, samples=19 00:20:47.119 lat (usec) : 500=94.80%, 
750=5.13%, 1000=0.05% 00:20:47.119 lat (msec) : 2=0.02%, 10=0.01% 00:20:47.119 cpu : usr=83.04%, sys=14.72%, ctx=94, majf=0, minf=0 00:20:47.119 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.119 issued rwts: total=84580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.119 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:47.119 00:20:47.120 Run status group 0 (all jobs): 00:20:47.120 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=330MiB (346MB), run=10001-10001msec 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 00:20:47.120 real 0m11.039s 00:20:47.120 user 0m8.964s 00:20:47.120 sys 0m1.759s 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:47.120 ************************************ 00:20:47.120 END TEST fio_dif_1_default 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 ************************************ 00:20:47.120 12:45:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:47.120 12:45:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:47.120 12:45:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:47.120 12:45:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 ************************************ 00:20:47.120 START TEST fio_dif_1_multi_subsystems 00:20:47.120 ************************************ 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
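The single-subsystem case that just finished (fio_dif_1_default) reduces to four rpc.py calls to stand up a DIF-enabled namespace, one fio run against it, and two calls to tear it down again. A standalone sketch of those calls, with values copied from the log; the test itself issues them through its rpc_cmd helper rather than as one-off invocations:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ... fio runs against the exported namespace here ...
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_null_delete bdev_null0

The multi-subsystem variant that starts below repeats the same block once per subsystem (cnode0 and cnode1) before launching a single fio run against both targets.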
00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 bdev_null0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 [2024-07-15 12:45:18.432652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 bdev_null1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 12:45:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.120 { 00:20:47.120 "params": { 00:20:47.120 "name": "Nvme$subsystem", 00:20:47.120 "trtype": "$TEST_TRANSPORT", 00:20:47.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.120 "adrfam": "ipv4", 00:20:47.120 "trsvcid": "$NVMF_PORT", 00:20:47.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.120 "hdgst": ${hdgst:-false}, 00:20:47.120 "ddgst": ${ddgst:-false} 00:20:47.120 }, 00:20:47.120 "method": "bdev_nvme_attach_controller" 00:20:47.120 } 00:20:47.120 EOF 00:20:47.120 )") 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.120 12:45:18 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.120 { 00:20:47.120 "params": { 00:20:47.120 "name": "Nvme$subsystem", 00:20:47.120 "trtype": "$TEST_TRANSPORT", 00:20:47.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.120 "adrfam": "ipv4", 00:20:47.120 "trsvcid": "$NVMF_PORT", 00:20:47.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.120 "hdgst": ${hdgst:-false}, 00:20:47.120 "ddgst": ${ddgst:-false} 00:20:47.120 }, 00:20:47.120 "method": "bdev_nvme_attach_controller" 00:20:47.120 } 00:20:47.120 EOF 00:20:47.120 )") 00:20:47.120 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
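The /dev/fd/62 and /dev/fd/61 arguments in the fio command lines of this log are bash process substitutions: gen_nvmf_target_json emits the bdev_nvme_attach_controller JSON printed just below, gen_fio_conf emits the job file, and neither is ever written to disk. With the external SPDK ioengine preloaded, the effective invocation is therefore roughly:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0 1) \
      <(gen_fio_conf)
  # each <( ... ) expands to a /dev/fd/NN path, which is exactly what shows up in the log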
00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:47.121 "params": { 00:20:47.121 "name": "Nvme0", 00:20:47.121 "trtype": "tcp", 00:20:47.121 "traddr": "10.0.0.2", 00:20:47.121 "adrfam": "ipv4", 00:20:47.121 "trsvcid": "4420", 00:20:47.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:47.121 "hdgst": false, 00:20:47.121 "ddgst": false 00:20:47.121 }, 00:20:47.121 "method": "bdev_nvme_attach_controller" 00:20:47.121 },{ 00:20:47.121 "params": { 00:20:47.121 "name": "Nvme1", 00:20:47.121 "trtype": "tcp", 00:20:47.121 "traddr": "10.0.0.2", 00:20:47.121 "adrfam": "ipv4", 00:20:47.121 "trsvcid": "4420", 00:20:47.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.121 "hdgst": false, 00:20:47.121 "ddgst": false 00:20:47.121 }, 00:20:47.121 "method": "bdev_nvme_attach_controller" 00:20:47.121 }' 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:47.121 12:45:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.121 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:47.121 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:47.121 fio-3.35 00:20:47.121 Starting 2 threads 00:20:57.108 00:20:57.108 filename0: (groupid=0, jobs=1): err= 0: pid=83343: Mon Jul 15 12:45:29 2024 00:20:57.108 read: IOPS=4555, BW=17.8MiB/s (18.7MB/s)(178MiB/10001msec) 00:20:57.108 slat (usec): min=7, max=410, avg=13.21, stdev= 3.57 00:20:57.108 clat (usec): min=660, max=5286, avg=842.57, stdev=73.71 00:20:57.108 lat (usec): min=670, max=5306, avg=855.79, stdev=74.06 00:20:57.108 clat percentiles (usec): 00:20:57.108 | 1.00th=[ 717], 5.00th=[ 758], 10.00th=[ 783], 20.00th=[ 799], 00:20:57.108 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 848], 00:20:57.108 | 70.00th=[ 865], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 955], 00:20:57.108 | 99.00th=[ 996], 99.50th=[ 1012], 99.90th=[ 1057], 99.95th=[ 1336], 00:20:57.108 | 99.99th=[ 1418] 00:20:57.108 bw ( KiB/s): min=17696, max=19104, per=49.94%, avg=18202.74, stdev=441.20, samples=19 00:20:57.108 iops : min= 4424, max= 4776, 
avg=4550.68, stdev=110.30, samples=19 00:20:57.108 lat (usec) : 750=4.29%, 1000=94.95% 00:20:57.108 lat (msec) : 2=0.76%, 10=0.01% 00:20:57.108 cpu : usr=89.40%, sys=9.17%, ctx=18, majf=0, minf=9 00:20:57.108 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.108 issued rwts: total=45564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.108 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:57.108 filename1: (groupid=0, jobs=1): err= 0: pid=83344: Mon Jul 15 12:45:29 2024 00:20:57.108 read: IOPS=4556, BW=17.8MiB/s (18.7MB/s)(178MiB/10001msec) 00:20:57.108 slat (nsec): min=5342, max=83718, avg=13411.44, stdev=2967.23 00:20:57.108 clat (usec): min=474, max=5511, avg=840.58, stdev=71.46 00:20:57.108 lat (usec): min=481, max=5543, avg=853.99, stdev=71.63 00:20:57.108 clat percentiles (usec): 00:20:57.108 | 1.00th=[ 758], 5.00th=[ 775], 10.00th=[ 783], 20.00th=[ 791], 00:20:57.108 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[ 824], 60.00th=[ 840], 00:20:57.108 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 930], 95.00th=[ 947], 00:20:57.108 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1037], 99.95th=[ 1319], 00:20:57.108 | 99.99th=[ 1467] 00:20:57.108 bw ( KiB/s): min=17696, max=19104, per=49.95%, avg=18206.11, stdev=445.21, samples=19 00:20:57.108 iops : min= 4424, max= 4776, avg=4551.53, stdev=111.30, samples=19 00:20:57.108 lat (usec) : 500=0.01%, 750=0.30%, 1000=99.39% 00:20:57.108 lat (msec) : 2=0.29%, 10=0.01% 00:20:57.108 cpu : usr=89.29%, sys=9.41%, ctx=22, majf=0, minf=0 00:20:57.108 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.108 issued rwts: total=45572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.108 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:57.108 00:20:57.108 Run status group 0 (all jobs): 00:20:57.108 READ: bw=35.6MiB/s (37.3MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=356MiB (373MB), run=10001-10001msec 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.108 00:20:57.108 real 0m11.144s 00:20:57.108 user 0m18.616s 00:20:57.108 sys 0m2.159s 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.108 12:45:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 ************************************ 00:20:57.108 END TEST fio_dif_1_multi_subsystems 00:20:57.108 ************************************ 00:20:57.109 12:45:29 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:57.109 12:45:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:57.109 12:45:29 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:57.109 12:45:29 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.109 12:45:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:57.109 ************************************ 00:20:57.109 START TEST fio_dif_rand_params 00:20:57.109 ************************************ 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:57.109 12:45:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.109 bdev_null0 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.109 [2024-07-15 12:45:29.629416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:57.109 
12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.109 { 00:20:57.109 "params": { 00:20:57.109 "name": "Nvme$subsystem", 00:20:57.109 "trtype": "$TEST_TRANSPORT", 00:20:57.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.109 "adrfam": "ipv4", 00:20:57.109 "trsvcid": "$NVMF_PORT", 00:20:57.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.109 "hdgst": ${hdgst:-false}, 00:20:57.109 "ddgst": ${ddgst:-false} 00:20:57.109 }, 00:20:57.109 "method": "bdev_nvme_attach_controller" 00:20:57.109 } 00:20:57.109 EOF 00:20:57.109 )") 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
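For fio_dif_rand_params the job file comes from gen_fio_conf, parameterised by the bs=128k, numjobs=3, iodepth=3 and runtime=5 values set at the top of the test. The file itself is not echoed into this log, but from the rw=randread, bs=(R) 128KiB, iodepth=3 banner and the Starting 3 threads line that follow, it plausibly looks like the sketch below. This is inferred, not the script's literal output; thread=1 and the Nvme0n1 filename are assumptions about how the SPDK fio plugin is normally driven:

  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1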
00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:57.109 "params": { 00:20:57.109 "name": "Nvme0", 00:20:57.109 "trtype": "tcp", 00:20:57.109 "traddr": "10.0.0.2", 00:20:57.109 "adrfam": "ipv4", 00:20:57.109 "trsvcid": "4420", 00:20:57.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:57.109 "hdgst": false, 00:20:57.109 "ddgst": false 00:20:57.109 }, 00:20:57.109 "method": "bdev_nvme_attach_controller" 00:20:57.109 }' 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:57.109 12:45:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:57.367 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:57.367 ... 
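[editor's note] The ldd/grep/awk steps and the LD_PRELOAD assignment traced above are the fio_bdev wrapper deciding whether the external ioengine was built against a sanitizer runtime: if libasan or libclang_rt.asan shows up in the plugin's ldd output, that runtime must be preloaded ahead of the plugin; either way the plugin path lands in LD_PRELOAD so fio can resolve --ioengine=spdk_bdev. A simplified re-sketch (the function name fio_with_plugin is illustrative, and the "|| :" guards are added here for robustness under set -e):

fio_with_plugin() {
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    local sanitizers=(libasan libclang_rt.asan)
    local lib asan_lib preload=
    for lib in "${sanitizers[@]}"; do
        # Third ldd column is the resolved library path; empty when not linked.
        asan_lib=$(ldd "$plugin" | grep "$lib" | awk '{print $3}') || :
        [[ -n $asan_lib ]] && preload+="$asan_lib "
    done
    # Sanitizer runtime (if any) must come before the plugin itself.
    LD_PRELOAD="$preload $plugin" /usr/src/fio/fio "$@"
}

In this run both checks come back empty (no sanitizer build), so LD_PRELOAD ends up as just ' .../build/fio/spdk_bdev', and fio is pointed at /dev/fd/62 (the JSON bdev config) and /dev/fd/61 (the generated job file), which appear to be supplied by fio_bdev via process substitution.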
00:20:57.367 fio-3.35 00:20:57.367 Starting 3 threads 00:21:03.945 00:21:03.945 filename0: (groupid=0, jobs=1): err= 0: pid=83500: Mon Jul 15 12:45:35 2024 00:21:03.945 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5007msec) 00:21:03.945 slat (nsec): min=7893, max=46900, avg=20311.95, stdev=7410.11 00:21:03.945 clat (usec): min=11501, max=26171, avg=11987.22, stdev=1807.85 00:21:03.945 lat (usec): min=11514, max=26207, avg=12007.53, stdev=1808.04 00:21:03.945 clat percentiles (usec): 00:21:03.945 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:21:03.945 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:21:03.945 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11731], 95.00th=[12125], 00:21:03.945 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:21:03.945 | 99.99th=[26084] 00:21:03.945 bw ( KiB/s): min=29184, max=33024, per=33.32%, avg=31872.00, stdev=1459.42, samples=10 00:21:03.945 iops : min= 228, max= 258, avg=249.00, stdev=11.40, samples=10 00:21:03.945 lat (msec) : 20=98.56%, 50=1.44% 00:21:03.945 cpu : usr=91.79%, sys=7.67%, ctx=4, majf=0, minf=9 00:21:03.945 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.945 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.945 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:03.945 filename0: (groupid=0, jobs=1): err= 0: pid=83501: Mon Jul 15 12:45:35 2024 00:21:03.945 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5006msec) 00:21:03.945 slat (nsec): min=8053, max=46119, avg=20200.10, stdev=7191.44 00:21:03.945 clat (usec): min=11546, max=26180, avg=11985.55, stdev=1807.71 00:21:03.945 lat (usec): min=11564, max=26215, avg=12005.75, stdev=1808.02 00:21:03.945 clat percentiles (usec): 00:21:03.945 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:21:03.945 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:21:03.945 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11731], 95.00th=[12125], 00:21:03.945 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:21:03.945 | 99.99th=[26084] 00:21:03.945 bw ( KiB/s): min=29184, max=33024, per=33.33%, avg=31878.40, stdev=1461.43, samples=10 00:21:03.945 iops : min= 228, max= 258, avg=249.00, stdev=11.40, samples=10 00:21:03.945 lat (msec) : 20=98.56%, 50=1.44% 00:21:03.945 cpu : usr=91.23%, sys=8.17%, ctx=13, majf=0, minf=9 00:21:03.946 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.946 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.946 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:03.946 filename0: (groupid=0, jobs=1): err= 0: pid=83502: Mon Jul 15 12:45:35 2024 00:21:03.946 read: IOPS=249, BW=31.1MiB/s (32.7MB/s)(156MiB/5010msec) 00:21:03.946 slat (usec): min=5, max=393, avg=19.78, stdev=13.85 00:21:03.946 clat (usec): min=11278, max=26165, avg=11993.16, stdev=1817.39 00:21:03.946 lat (usec): min=11288, max=26201, avg=12012.94, stdev=1818.14 00:21:03.946 clat percentiles (usec): 00:21:03.946 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:21:03.946 | 30.00th=[11600], 
40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:21:03.946 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11731], 95.00th=[12125], 00:21:03.946 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:21:03.946 | 99.99th=[26084] 00:21:03.946 bw ( KiB/s): min=29184, max=33024, per=33.32%, avg=31872.00, stdev=1459.42, samples=10 00:21:03.946 iops : min= 228, max= 258, avg=249.00, stdev=11.40, samples=10 00:21:03.946 lat (msec) : 20=98.56%, 50=1.44% 00:21:03.946 cpu : usr=90.36%, sys=8.76%, ctx=60, majf=0, minf=9 00:21:03.946 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.946 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.946 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:03.946 00:21:03.946 Run status group 0 (all jobs): 00:21:03.946 READ: bw=93.4MiB/s (97.9MB/s), 31.1MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=468MiB (491MB), run=5006-5010msec 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 
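[editor's note] The summary line above is internally consistent: each of the three jobs averaged about 249 IOPS at 128 KiB, i.e. 249 x 128 KiB is roughly 31.1 MiB/s per job and roughly 93.4 MiB/s aggregate, which matches "READ: bw=93.4MiB/s ... io=468MiB ... run=5006-5010msec". The re-setup that starts here switches to DIF type 2 and three subsystems. As a hedged sketch, one round of create_subsystem corresponds to the following direct RPC calls; rpc_cmd in the trace is a thin wrapper that forwards to scripts/rpc.py against the running target, the rpc.py path below is an assumption, and the TCP transport itself is created earlier in the test (e.g. via nvmf_create_transport -t tcp):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the SPDK RPC client
sub=0

# 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 2
$rpc bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 2

# Export it over NVMe/TCP on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
    --serial-number 53313233-$sub --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
    -t tcp -a 10.0.0.2 -s 4420

dif.sh repeats this for sub 0, 1 and 2, which is exactly the sequence visible in the trace that follows.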
00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 bdev_null0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 [2024-07-15 12:45:35.622207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 bdev_null1 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 bdev_null2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.946 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.946 { 00:21:03.946 "params": { 00:21:03.946 "name": "Nvme$subsystem", 00:21:03.946 "trtype": "$TEST_TRANSPORT", 00:21:03.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.946 "adrfam": "ipv4", 00:21:03.946 "trsvcid": "$NVMF_PORT", 00:21:03.946 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.946 "hdgst": ${hdgst:-false}, 00:21:03.946 "ddgst": ${ddgst:-false} 00:21:03.946 }, 00:21:03.946 "method": "bdev_nvme_attach_controller" 00:21:03.946 } 00:21:03.946 EOF 00:21:03.946 )") 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.947 { 00:21:03.947 "params": { 00:21:03.947 "name": "Nvme$subsystem", 00:21:03.947 "trtype": "$TEST_TRANSPORT", 00:21:03.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.947 "adrfam": "ipv4", 00:21:03.947 "trsvcid": "$NVMF_PORT", 00:21:03.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.947 "hdgst": ${hdgst:-false}, 00:21:03.947 "ddgst": ${ddgst:-false} 00:21:03.947 }, 00:21:03.947 "method": "bdev_nvme_attach_controller" 00:21:03.947 } 00:21:03.947 EOF 00:21:03.947 )") 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.947 { 00:21:03.947 "params": { 00:21:03.947 "name": "Nvme$subsystem", 00:21:03.947 "trtype": "$TEST_TRANSPORT", 00:21:03.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.947 "adrfam": "ipv4", 00:21:03.947 "trsvcid": "$NVMF_PORT", 00:21:03.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.947 "hdgst": ${hdgst:-false}, 00:21:03.947 "ddgst": ${ddgst:-false} 00:21:03.947 }, 00:21:03.947 "method": "bdev_nvme_attach_controller" 00:21:03.947 } 00:21:03.947 EOF 00:21:03.947 )") 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:03.947 "params": { 00:21:03.947 "name": "Nvme0", 00:21:03.947 "trtype": "tcp", 00:21:03.947 "traddr": "10.0.0.2", 00:21:03.947 "adrfam": "ipv4", 00:21:03.947 "trsvcid": "4420", 00:21:03.947 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.947 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:03.947 "hdgst": false, 00:21:03.947 "ddgst": false 00:21:03.947 }, 00:21:03.947 "method": "bdev_nvme_attach_controller" 00:21:03.947 },{ 00:21:03.947 "params": { 00:21:03.947 "name": "Nvme1", 00:21:03.947 "trtype": "tcp", 00:21:03.947 "traddr": "10.0.0.2", 00:21:03.947 "adrfam": "ipv4", 00:21:03.947 "trsvcid": "4420", 00:21:03.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.947 "hdgst": false, 00:21:03.947 "ddgst": false 00:21:03.947 }, 00:21:03.947 "method": "bdev_nvme_attach_controller" 00:21:03.947 },{ 00:21:03.947 "params": { 00:21:03.947 "name": "Nvme2", 00:21:03.947 "trtype": "tcp", 00:21:03.947 "traddr": "10.0.0.2", 00:21:03.947 "adrfam": "ipv4", 00:21:03.947 "trsvcid": "4420", 00:21:03.947 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.947 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:03.947 "hdgst": false, 00:21:03.947 "ddgst": false 00:21:03.947 }, 00:21:03.947 "method": "bdev_nvme_attach_controller" 00:21:03.947 }' 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:03.947 12:45:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:03.947 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:03.947 ... 00:21:03.947 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:03.947 ... 00:21:03.947 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:03.947 ... 00:21:03.947 fio-3.35 00:21:03.947 Starting 24 threads 00:21:16.138 00:21:16.138 filename0: (groupid=0, jobs=1): err= 0: pid=83597: Mon Jul 15 12:45:46 2024 00:21:16.138 read: IOPS=214, BW=859KiB/s (879kB/s)(8632KiB/10051msec) 00:21:16.138 slat (usec): min=5, max=5024, avg=16.63, stdev=108.04 00:21:16.138 clat (usec): min=1576, max=202372, avg=74364.07, stdev=32125.40 00:21:16.138 lat (usec): min=1587, max=202387, avg=74380.71, stdev=32124.91 00:21:16.138 clat percentiles (usec): 00:21:16.138 | 1.00th=[ 1713], 5.00th=[ 3228], 10.00th=[ 38536], 20.00th=[ 50594], 00:21:16.138 | 30.00th=[ 60031], 40.00th=[ 71828], 50.00th=[ 71828], 60.00th=[ 80217], 00:21:16.138 | 70.00th=[ 84411], 80.00th=[ 96994], 90.00th=[120062], 95.00th=[128451], 00:21:16.138 | 99.00th=[143655], 99.50th=[145753], 99.90th=[154141], 99.95th=[154141], 00:21:16.138 | 99.99th=[202376] 00:21:16.138 bw ( KiB/s): min= 504, max= 2336, per=4.55%, avg=856.80, stdev=378.00, samples=20 00:21:16.138 iops : min= 126, max= 584, avg=214.20, stdev=94.50, samples=20 00:21:16.138 lat (msec) : 2=1.85%, 4=3.99%, 10=1.48%, 20=0.83%, 50=11.35% 00:21:16.139 lat (msec) : 100=61.72%, 250=18.77% 00:21:16.139 cpu : usr=37.35%, sys=2.03%, ctx=981, majf=0, minf=0 00:21:16.139 IO depths : 1=0.4%, 2=0.9%, 4=2.4%, 8=80.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename0: (groupid=0, jobs=1): err= 0: pid=83598: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=201, BW=808KiB/s (827kB/s)(8096KiB/10025msec) 00:21:16.139 slat (usec): min=7, max=3679, avg=18.00, stdev=81.83 00:21:16.139 clat (msec): min=26, max=154, avg=79.08, stdev=24.22 00:21:16.139 lat (msec): min=26, max=154, avg=79.10, stdev=24.22 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 57], 00:21:16.139 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:21:16.139 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 120], 95.00th=[ 128], 00:21:16.139 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:21:16.139 | 99.99th=[ 155] 00:21:16.139 bw ( KiB/s): min= 560, max= 1026, per=4.28%, avg=805.70, stdev=146.25, samples=20 00:21:16.139 iops : min= 140, max= 256, avg=201.40, stdev=36.52, samples=20 00:21:16.139 lat (msec) : 50=12.10%, 100=71.20%, 250=16.70% 00:21:16.139 cpu : usr=37.47%, sys=2.33%, ctx=1184, majf=0, minf=9 00:21:16.139 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 
0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename0: (groupid=0, jobs=1): err= 0: pid=83599: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=197, BW=788KiB/s (807kB/s)(7892KiB/10012msec) 00:21:16.139 slat (usec): min=8, max=8027, avg=26.58, stdev=270.53 00:21:16.139 clat (msec): min=23, max=166, avg=81.06, stdev=24.90 00:21:16.139 lat (msec): min=23, max=166, avg=81.09, stdev=24.90 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 60], 00:21:16.139 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 82], 00:21:16.139 | 70.00th=[ 88], 80.00th=[ 104], 90.00th=[ 121], 95.00th=[ 129], 00:21:16.139 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:21:16.139 | 99.99th=[ 167] 00:21:16.139 bw ( KiB/s): min= 608, max= 993, per=4.14%, avg=779.84, stdev=137.58, samples=19 00:21:16.139 iops : min= 152, max= 248, avg=194.95, stdev=34.37, samples=19 00:21:16.139 lat (msec) : 50=9.93%, 100=69.59%, 250=20.48% 00:21:16.139 cpu : usr=41.10%, sys=2.34%, ctx=1395, majf=0, minf=9 00:21:16.139 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename0: (groupid=0, jobs=1): err= 0: pid=83600: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=181, BW=724KiB/s (741kB/s)(7268KiB/10037msec) 00:21:16.139 slat (usec): min=4, max=8033, avg=19.20, stdev=188.26 00:21:16.139 clat (msec): min=45, max=160, avg=88.20, stdev=24.67 00:21:16.139 lat (msec): min=45, max=160, avg=88.22, stdev=24.68 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 47], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 72], 00:21:16.139 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 85], 00:21:16.139 | 70.00th=[ 96], 80.00th=[ 112], 90.00th=[ 128], 95.00th=[ 133], 00:21:16.139 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:21:16.139 | 99.99th=[ 161] 00:21:16.139 bw ( KiB/s): min= 512, max= 912, per=3.82%, avg=720.20, stdev=130.25, samples=20 00:21:16.139 iops : min= 128, max= 228, avg=180.00, stdev=32.54, samples=20 00:21:16.139 lat (msec) : 50=3.41%, 100=68.08%, 250=28.51% 00:21:16.139 cpu : usr=32.26%, sys=2.10%, ctx=931, majf=0, minf=9 00:21:16.139 IO depths : 1=0.1%, 2=2.6%, 4=10.3%, 8=72.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=1817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename0: (groupid=0, jobs=1): err= 0: pid=83601: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=210, BW=843KiB/s (863kB/s)(8432KiB/10002msec) 00:21:16.139 slat (usec): min=7, max=8034, avg=29.64, stdev=311.23 00:21:16.139 clat (msec): min=2, max=157, avg=75.78, stdev=28.86 00:21:16.139 lat (msec): min=2, max=157, avg=75.81, stdev=28.86 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:21:16.139 | 
30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:21:16.139 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 129], 00:21:16.139 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 159], 00:21:16.139 | 99.99th=[ 159] 00:21:16.139 bw ( KiB/s): min= 560, max= 1024, per=4.29%, avg=807.16, stdev=152.88, samples=19 00:21:16.139 iops : min= 140, max= 256, avg=201.79, stdev=38.22, samples=19 00:21:16.139 lat (msec) : 4=1.47%, 10=2.13%, 50=16.89%, 100=61.67%, 250=17.84% 00:21:16.139 cpu : usr=35.08%, sys=2.30%, ctx=1079, majf=0, minf=9 00:21:16.139 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename0: (groupid=0, jobs=1): err= 0: pid=83602: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=202, BW=809KiB/s (828kB/s)(8092KiB/10005msec) 00:21:16.139 slat (usec): min=5, max=8044, avg=26.62, stdev=267.61 00:21:16.139 clat (usec): min=1886, max=155195, avg=78995.75, stdev=27108.62 00:21:16.139 lat (usec): min=1900, max=155209, avg=79022.37, stdev=27102.49 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 5], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 60], 00:21:16.139 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:21:16.139 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 121], 95.00th=[ 131], 00:21:16.139 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:21:16.139 | 99.99th=[ 157] 00:21:16.139 bw ( KiB/s): min= 608, max= 1000, per=4.12%, avg=776.84, stdev=131.37, samples=19 00:21:16.139 iops : min= 152, max= 250, avg=194.21, stdev=32.84, samples=19 00:21:16.139 lat (msec) : 2=0.15%, 4=0.49%, 10=2.32%, 50=10.58%, 100=67.97% 00:21:16.139 lat (msec) : 250=18.49% 00:21:16.139 cpu : usr=34.32%, sys=2.26%, ctx=980, majf=0, minf=9 00:21:16.139 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=77.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename0: (groupid=0, jobs=1): err= 0: pid=83603: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=207, BW=829KiB/s (849kB/s)(8300KiB/10007msec) 00:21:16.139 slat (usec): min=3, max=8041, avg=30.50, stdev=317.13 00:21:16.139 clat (msec): min=3, max=168, avg=76.98, stdev=26.69 00:21:16.139 lat (msec): min=3, max=168, avg=77.01, stdev=26.70 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 10], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 54], 00:21:16.139 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:21:16.139 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 131], 00:21:16.139 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 169], 00:21:16.139 | 99.99th=[ 169] 00:21:16.139 bw ( KiB/s): min= 560, max= 1024, per=4.32%, avg=813.47, stdev=155.86, samples=19 00:21:16.139 iops : min= 140, max= 256, avg=203.37, stdev=38.96, samples=19 00:21:16.139 lat (msec) : 4=0.14%, 10=1.25%, 50=15.42%, 100=64.72%, 250=18.46% 00:21:16.139 cpu : usr=36.40%, sys=2.18%, ctx=1013, majf=0, minf=9 00:21:16.139 IO depths : 1=0.1%, 
2=0.4%, 4=1.7%, 8=82.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=2075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename0: (groupid=0, jobs=1): err= 0: pid=83604: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=213, BW=854KiB/s (875kB/s)(8544KiB/10001msec) 00:21:16.139 slat (usec): min=4, max=8041, avg=30.99, stdev=346.72 00:21:16.139 clat (usec): min=906, max=157838, avg=74784.94, stdev=28896.81 00:21:16.139 lat (usec): min=914, max=157871, avg=74815.94, stdev=28896.04 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 3], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:21:16.139 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:21:16.139 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 129], 00:21:16.139 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:21:16.139 | 99.99th=[ 159] 00:21:16.139 bw ( KiB/s): min= 608, max= 1024, per=4.33%, avg=816.05, stdev=152.95, samples=19 00:21:16.139 iops : min= 152, max= 256, avg=204.00, stdev=38.24, samples=19 00:21:16.139 lat (usec) : 1000=0.28% 00:21:16.139 lat (msec) : 2=0.28%, 4=1.31%, 10=2.01%, 50=17.18%, 100=62.64% 00:21:16.139 lat (msec) : 250=16.29% 00:21:16.139 cpu : usr=35.96%, sys=2.42%, ctx=1060, majf=0, minf=9 00:21:16.139 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=83.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.139 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.139 filename1: (groupid=0, jobs=1): err= 0: pid=83605: Mon Jul 15 12:45:46 2024 00:21:16.139 read: IOPS=196, BW=787KiB/s (806kB/s)(7888KiB/10025msec) 00:21:16.139 slat (usec): min=5, max=8031, avg=36.82, stdev=347.37 00:21:16.139 clat (msec): min=37, max=158, avg=81.06, stdev=24.33 00:21:16.139 lat (msec): min=37, max=158, avg=81.10, stdev=24.33 00:21:16.139 clat percentiles (msec): 00:21:16.139 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:21:16.139 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:21:16.139 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 121], 95.00th=[ 130], 00:21:16.139 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 159], 00:21:16.139 | 99.99th=[ 159] 00:21:16.139 bw ( KiB/s): min= 536, max= 961, per=4.16%, avg=784.85, stdev=137.26, samples=20 00:21:16.139 iops : min= 134, max= 240, avg=196.20, stdev=34.30, samples=20 00:21:16.140 lat (msec) : 50=11.26%, 100=69.07%, 250=19.68% 00:21:16.140 cpu : usr=37.57%, sys=2.12%, ctx=1193, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename1: (groupid=0, jobs=1): err= 0: pid=83606: Mon Jul 15 12:45:46 2024 00:21:16.140 read: IOPS=192, BW=770KiB/s (789kB/s)(7724KiB/10030msec) 00:21:16.140 slat (usec): min=3, max=8032, avg=24.56, 
stdev=258.07 00:21:16.140 clat (msec): min=29, max=172, avg=82.94, stdev=25.21 00:21:16.140 lat (msec): min=29, max=172, avg=82.96, stdev=25.21 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:21:16.140 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:21:16.140 | 70.00th=[ 94], 80.00th=[ 106], 90.00th=[ 122], 95.00th=[ 132], 00:21:16.140 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 174], 00:21:16.140 | 99.99th=[ 174] 00:21:16.140 bw ( KiB/s): min= 568, max= 968, per=4.07%, avg=767.65, stdev=135.62, samples=20 00:21:16.140 iops : min= 142, max= 242, avg=191.90, stdev=33.91, samples=20 00:21:16.140 lat (msec) : 50=10.62%, 100=68.00%, 250=21.39% 00:21:16.140 cpu : usr=31.06%, sys=2.03%, ctx=895, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename1: (groupid=0, jobs=1): err= 0: pid=83607: Mon Jul 15 12:45:46 2024 00:21:16.140 read: IOPS=194, BW=778KiB/s (796kB/s)(7804KiB/10034msec) 00:21:16.140 slat (usec): min=4, max=8021, avg=23.88, stdev=222.16 00:21:16.140 clat (msec): min=38, max=157, avg=82.08, stdev=23.96 00:21:16.140 lat (msec): min=38, max=157, avg=82.10, stdev=23.96 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 62], 00:21:16.140 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:21:16.140 | 70.00th=[ 90], 80.00th=[ 103], 90.00th=[ 120], 95.00th=[ 130], 00:21:16.140 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 159], 00:21:16.140 | 99.99th=[ 159] 00:21:16.140 bw ( KiB/s): min= 536, max= 1024, per=4.12%, avg=775.70, stdev=133.72, samples=20 00:21:16.140 iops : min= 134, max= 256, avg=193.90, stdev=33.43, samples=20 00:21:16.140 lat (msec) : 50=8.82%, 100=70.63%, 250=20.55% 00:21:16.140 cpu : usr=38.46%, sys=2.40%, ctx=1192, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=76.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=88.7%, 8=9.8%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename1: (groupid=0, jobs=1): err= 0: pid=83608: Mon Jul 15 12:45:46 2024 00:21:16.140 read: IOPS=195, BW=784KiB/s (803kB/s)(7872KiB/10042msec) 00:21:16.140 slat (nsec): min=8147, max=55443, avg=14971.36, stdev=6724.49 00:21:16.140 clat (msec): min=35, max=159, avg=81.47, stdev=25.34 00:21:16.140 lat (msec): min=35, max=159, avg=81.49, stdev=25.34 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:21:16.140 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 84], 00:21:16.140 | 70.00th=[ 89], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 130], 00:21:16.140 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:21:16.140 | 99.99th=[ 159] 00:21:16.140 bw ( KiB/s): min= 512, max= 1000, per=4.15%, avg=782.65, stdev=152.29, samples=20 00:21:16.140 iops : min= 128, max= 250, avg=195.60, stdev=38.06, 
samples=20 00:21:16.140 lat (msec) : 50=11.43%, 100=67.28%, 250=21.29% 00:21:16.140 cpu : usr=35.18%, sys=2.10%, ctx=980, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename1: (groupid=0, jobs=1): err= 0: pid=83609: Mon Jul 15 12:45:46 2024 00:21:16.140 read: IOPS=196, BW=788KiB/s (807kB/s)(7916KiB/10046msec) 00:21:16.140 slat (usec): min=6, max=8050, avg=25.81, stdev=270.71 00:21:16.140 clat (msec): min=2, max=202, avg=81.00, stdev=30.11 00:21:16.140 lat (msec): min=2, max=202, avg=81.03, stdev=30.12 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 49], 20.00th=[ 60], 00:21:16.140 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:21:16.140 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 131], 00:21:16.140 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 203], 00:21:16.140 | 99.99th=[ 203] 00:21:16.140 bw ( KiB/s): min= 488, max= 1784, per=4.16%, avg=784.50, stdev=265.50, samples=20 00:21:16.140 iops : min= 122, max= 446, avg=196.10, stdev=66.38, samples=20 00:21:16.140 lat (msec) : 4=2.32%, 10=2.43%, 20=0.81%, 50=6.37%, 100=65.18% 00:21:16.140 lat (msec) : 250=22.89% 00:21:16.140 cpu : usr=35.80%, sys=2.65%, ctx=1041, majf=0, minf=9 00:21:16.140 IO depths : 1=0.2%, 2=2.1%, 4=7.6%, 8=74.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=89.6%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename1: (groupid=0, jobs=1): err= 0: pid=83610: Mon Jul 15 12:45:46 2024 00:21:16.140 read: IOPS=198, BW=794KiB/s (813kB/s)(7960KiB/10030msec) 00:21:16.140 slat (usec): min=3, max=4033, avg=20.51, stdev=90.49 00:21:16.140 clat (msec): min=38, max=156, avg=80.48, stdev=24.80 00:21:16.140 lat (msec): min=38, max=156, avg=80.50, stdev=24.80 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 57], 00:21:16.140 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:21:16.140 | 70.00th=[ 86], 80.00th=[ 101], 90.00th=[ 121], 95.00th=[ 132], 00:21:16.140 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:21:16.140 | 99.99th=[ 157] 00:21:16.140 bw ( KiB/s): min= 536, max= 1024, per=4.20%, avg=791.20, stdev=148.30, samples=20 00:21:16.140 iops : min= 134, max= 256, avg=197.75, stdev=37.01, samples=20 00:21:16.140 lat (msec) : 50=13.07%, 100=66.98%, 250=19.95% 00:21:16.140 cpu : usr=36.29%, sys=2.31%, ctx=1004, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename1: (groupid=0, jobs=1): err= 0: pid=83611: Mon Jul 15 12:45:46 2024 00:21:16.140 
read: IOPS=197, BW=790KiB/s (809kB/s)(7920KiB/10025msec) 00:21:16.140 slat (usec): min=4, max=8027, avg=22.18, stdev=201.48 00:21:16.140 clat (msec): min=19, max=160, avg=80.83, stdev=25.78 00:21:16.140 lat (msec): min=19, max=160, avg=80.85, stdev=25.77 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 59], 00:21:16.140 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:21:16.140 | 70.00th=[ 90], 80.00th=[ 103], 90.00th=[ 122], 95.00th=[ 132], 00:21:16.140 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 161], 00:21:16.140 | 99.99th=[ 161] 00:21:16.140 bw ( KiB/s): min= 512, max= 1000, per=4.18%, avg=787.45, stdev=152.91, samples=20 00:21:16.140 iops : min= 128, max= 250, avg=196.80, stdev=38.20, samples=20 00:21:16.140 lat (msec) : 20=0.25%, 50=9.80%, 100=68.79%, 250=21.16% 00:21:16.140 cpu : usr=38.76%, sys=2.65%, ctx=1417, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename1: (groupid=0, jobs=1): err= 0: pid=83612: Mon Jul 15 12:45:46 2024 00:21:16.140 read: IOPS=183, BW=733KiB/s (751kB/s)(7356KiB/10036msec) 00:21:16.140 slat (usec): min=4, max=8027, avg=23.97, stdev=265.32 00:21:16.140 clat (msec): min=8, max=162, avg=87.09, stdev=26.78 00:21:16.140 lat (msec): min=8, max=162, avg=87.11, stdev=26.78 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 69], 00:21:16.140 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 89], 00:21:16.140 | 70.00th=[ 96], 80.00th=[ 115], 90.00th=[ 130], 95.00th=[ 132], 00:21:16.140 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 163], 00:21:16.140 | 99.99th=[ 163] 00:21:16.140 bw ( KiB/s): min= 528, max= 1264, per=3.87%, avg=729.20, stdev=165.73, samples=20 00:21:16.140 iops : min= 132, max= 316, avg=182.30, stdev=41.43, samples=20 00:21:16.140 lat (msec) : 10=0.76%, 20=0.98%, 50=4.46%, 100=66.88%, 250=26.92% 00:21:16.140 cpu : usr=32.38%, sys=2.08%, ctx=1040, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=2.1%, 4=8.5%, 8=73.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 complete : 0=0.0%, 4=89.9%, 8=8.2%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.140 issued rwts: total=1839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.140 filename2: (groupid=0, jobs=1): err= 0: pid=83613: Mon Jul 15 12:45:46 2024 00:21:16.140 read: IOPS=177, BW=710KiB/s (727kB/s)(7128KiB/10034msec) 00:21:16.140 slat (usec): min=4, max=8043, avg=36.33, stdev=397.99 00:21:16.140 clat (msec): min=42, max=192, avg=89.83, stdev=28.71 00:21:16.140 lat (msec): min=42, max=192, avg=89.87, stdev=28.70 00:21:16.140 clat percentiles (msec): 00:21:16.140 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 71], 00:21:16.140 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 86], 00:21:16.140 | 70.00th=[ 106], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 132], 00:21:16.140 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:21:16.140 | 99.99th=[ 192] 00:21:16.140 bw ( KiB/s): min= 440, 
max= 1000, per=3.76%, avg=708.10, stdev=173.52, samples=20 00:21:16.140 iops : min= 110, max= 250, avg=177.00, stdev=43.37, samples=20 00:21:16.140 lat (msec) : 50=7.24%, 100=59.48%, 250=33.28% 00:21:16.140 cpu : usr=32.78%, sys=1.77%, ctx=952, majf=0, minf=9 00:21:16.140 IO depths : 1=0.1%, 2=3.8%, 4=15.2%, 8=66.9%, 16=14.0%, 32=0.0%, >=64=0.0% 00:21:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=91.5%, 8=5.2%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=1782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.141 filename2: (groupid=0, jobs=1): err= 0: pid=83614: Mon Jul 15 12:45:46 2024 00:21:16.141 read: IOPS=202, BW=810KiB/s (830kB/s)(8128KiB/10032msec) 00:21:16.141 slat (usec): min=3, max=8026, avg=27.90, stdev=253.68 00:21:16.141 clat (msec): min=7, max=159, avg=78.77, stdev=26.75 00:21:16.141 lat (msec): min=7, max=159, avg=78.80, stdev=26.75 00:21:16.141 clat percentiles (msec): 00:21:16.141 | 1.00th=[ 9], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 57], 00:21:16.141 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:21:16.141 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 121], 95.00th=[ 129], 00:21:16.141 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 161], 00:21:16.141 | 99.99th=[ 161] 00:21:16.141 bw ( KiB/s): min= 560, max= 1171, per=4.29%, avg=808.95, stdev=167.14, samples=20 00:21:16.141 iops : min= 140, max= 292, avg=202.20, stdev=41.70, samples=20 00:21:16.141 lat (msec) : 10=1.38%, 20=0.79%, 50=10.83%, 100=67.03%, 250=19.98% 00:21:16.141 cpu : usr=40.63%, sys=2.89%, ctx=1187, majf=0, minf=9 00:21:16.141 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.141 filename2: (groupid=0, jobs=1): err= 0: pid=83615: Mon Jul 15 12:45:46 2024 00:21:16.141 read: IOPS=198, BW=793KiB/s (812kB/s)(7964KiB/10038msec) 00:21:16.141 slat (usec): min=7, max=8029, avg=25.01, stdev=230.25 00:21:16.141 clat (msec): min=25, max=170, avg=80.47, stdev=26.89 00:21:16.141 lat (msec): min=25, max=171, avg=80.49, stdev=26.89 00:21:16.141 clat percentiles (msec): 00:21:16.141 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 56], 00:21:16.141 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:21:16.141 | 70.00th=[ 90], 80.00th=[ 106], 90.00th=[ 123], 95.00th=[ 132], 00:21:16.141 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 171], 00:21:16.141 | 99.99th=[ 171] 00:21:16.141 bw ( KiB/s): min= 504, max= 1024, per=4.20%, avg=791.80, stdev=177.72, samples=20 00:21:16.141 iops : min= 126, max= 256, avg=197.90, stdev=44.41, samples=20 00:21:16.141 lat (msec) : 50=12.76%, 100=65.09%, 250=22.15% 00:21:16.141 cpu : usr=36.11%, sys=2.76%, ctx=1325, majf=0, minf=9 00:21:16.141 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=1991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 latency : target=0, window=0, percentile=100.00%, depth=16 
00:21:16.141 filename2: (groupid=0, jobs=1): err= 0: pid=83616: Mon Jul 15 12:45:46 2024 00:21:16.141 read: IOPS=187, BW=749KiB/s (767kB/s)(7516KiB/10038msec) 00:21:16.141 slat (usec): min=7, max=12047, avg=35.01, stdev=402.88 00:21:16.141 clat (msec): min=42, max=153, avg=85.26, stdev=23.57 00:21:16.141 lat (msec): min=42, max=153, avg=85.30, stdev=23.56 00:21:16.141 clat percentiles (msec): 00:21:16.141 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 70], 00:21:16.141 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 85], 00:21:16.141 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 124], 95.00th=[ 131], 00:21:16.141 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:21:16.141 | 99.99th=[ 155] 00:21:16.141 bw ( KiB/s): min= 560, max= 944, per=3.96%, avg=745.00, stdev=111.76, samples=20 00:21:16.141 iops : min= 140, max= 236, avg=186.20, stdev=27.92, samples=20 00:21:16.141 lat (msec) : 50=6.65%, 100=71.05%, 250=22.30% 00:21:16.141 cpu : usr=35.99%, sys=2.16%, ctx=1004, majf=0, minf=9 00:21:16.141 IO depths : 1=0.1%, 2=1.5%, 4=6.4%, 8=76.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=1879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.141 filename2: (groupid=0, jobs=1): err= 0: pid=83617: Mon Jul 15 12:45:46 2024 00:21:16.141 read: IOPS=202, BW=811KiB/s (831kB/s)(8132KiB/10024msec) 00:21:16.141 slat (usec): min=4, max=12032, avg=25.95, stdev=294.75 00:21:16.141 clat (msec): min=24, max=158, avg=78.77, stdev=25.07 00:21:16.141 lat (msec): min=24, max=159, avg=78.79, stdev=25.07 00:21:16.141 clat percentiles (msec): 00:21:16.141 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:21:16.141 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:21:16.141 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 121], 95.00th=[ 129], 00:21:16.141 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 159], 99.95th=[ 159], 00:21:16.141 | 99.99th=[ 159] 00:21:16.141 bw ( KiB/s): min= 568, max= 976, per=4.28%, avg=806.80, stdev=148.13, samples=20 00:21:16.141 iops : min= 142, max= 244, avg=201.70, stdev=37.03, samples=20 00:21:16.141 lat (msec) : 50=12.54%, 100=68.47%, 250=18.99% 00:21:16.141 cpu : usr=40.09%, sys=2.35%, ctx=1232, majf=0, minf=9 00:21:16.141 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.141 filename2: (groupid=0, jobs=1): err= 0: pid=83618: Mon Jul 15 12:45:46 2024 00:21:16.141 read: IOPS=197, BW=790KiB/s (809kB/s)(7928KiB/10035msec) 00:21:16.141 slat (usec): min=3, max=8026, avg=27.02, stdev=254.62 00:21:16.141 clat (msec): min=31, max=165, avg=80.82, stdev=24.89 00:21:16.141 lat (msec): min=32, max=165, avg=80.85, stdev=24.89 00:21:16.141 clat percentiles (msec): 00:21:16.141 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 60], 00:21:16.141 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:21:16.141 | 70.00th=[ 89], 80.00th=[ 102], 90.00th=[ 121], 95.00th=[ 129], 00:21:16.141 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 
167], 00:21:16.141 | 99.99th=[ 167] 00:21:16.141 bw ( KiB/s): min= 544, max= 1024, per=4.17%, avg=786.15, stdev=141.16, samples=20 00:21:16.141 iops : min= 136, max= 256, avg=196.50, stdev=35.30, samples=20 00:21:16.141 lat (msec) : 50=11.65%, 100=66.95%, 250=21.39% 00:21:16.141 cpu : usr=39.94%, sys=2.42%, ctx=1144, majf=0, minf=9 00:21:16.141 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=1982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.141 filename2: (groupid=0, jobs=1): err= 0: pid=83619: Mon Jul 15 12:45:46 2024 00:21:16.141 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10018msec) 00:21:16.141 slat (usec): min=4, max=8146, avg=29.50, stdev=248.86 00:21:16.141 clat (msec): min=25, max=179, avg=86.57, stdev=28.27 00:21:16.141 lat (msec): min=25, max=179, avg=86.59, stdev=28.26 00:21:16.141 clat percentiles (msec): 00:21:16.141 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 68], 00:21:16.141 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:21:16.141 | 70.00th=[ 97], 80.00th=[ 115], 90.00th=[ 131], 95.00th=[ 140], 00:21:16.141 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:21:16.141 | 99.99th=[ 180] 00:21:16.141 bw ( KiB/s): min= 496, max= 1000, per=3.89%, avg=732.60, stdev=171.02, samples=20 00:21:16.141 iops : min= 124, max= 250, avg=183.15, stdev=42.75, samples=20 00:21:16.141 lat (msec) : 50=8.87%, 100=62.93%, 250=28.19% 00:21:16.141 cpu : usr=38.13%, sys=2.86%, ctx=1240, majf=0, minf=9 00:21:16.141 IO depths : 1=0.1%, 2=3.2%, 4=12.9%, 8=69.8%, 16=14.0%, 32=0.0%, >=64=0.0% 00:21:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=90.6%, 8=6.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.141 filename2: (groupid=0, jobs=1): err= 0: pid=83620: Mon Jul 15 12:45:46 2024 00:21:16.141 read: IOPS=183, BW=734KiB/s (751kB/s)(7364KiB/10038msec) 00:21:16.141 slat (usec): min=7, max=8039, avg=34.05, stdev=302.20 00:21:16.141 clat (msec): min=16, max=179, avg=86.97, stdev=28.26 00:21:16.141 lat (msec): min=16, max=179, avg=87.00, stdev=28.26 00:21:16.141 clat percentiles (msec): 00:21:16.141 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 68], 00:21:16.141 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:21:16.141 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 128], 95.00th=[ 134], 00:21:16.141 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 180], 00:21:16.141 | 99.99th=[ 180] 00:21:16.141 bw ( KiB/s): min= 416, max= 1008, per=3.87%, avg=729.50, stdev=178.70, samples=20 00:21:16.141 iops : min= 104, max= 252, avg=182.30, stdev=44.66, samples=20 00:21:16.141 lat (msec) : 20=0.76%, 50=6.84%, 100=63.34%, 250=29.06% 00:21:16.141 cpu : usr=41.24%, sys=2.54%, ctx=1562, majf=0, minf=9 00:21:16.141 IO depths : 1=0.1%, 2=3.5%, 4=14.0%, 8=68.4%, 16=14.1%, 32=0.0%, >=64=0.0% 00:21:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 complete : 0=0.0%, 4=91.1%, 8=5.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.141 issued rwts: total=1841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.141 
latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.141 00:21:16.141 Run status group 0 (all jobs): 00:21:16.141 READ: bw=18.4MiB/s (19.3MB/s), 710KiB/s-859KiB/s (727kB/s-879kB/s), io=185MiB (194MB), run=10001-10051msec 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.141 12:45:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.141 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.141 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:16.141 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.141 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.141 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.141 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:16.141 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 bdev_null0 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 [2024-07-15 12:45:47.079465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:16.142 12:45:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 bdev_null1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.142 { 00:21:16.142 "params": { 00:21:16.142 "name": "Nvme$subsystem", 00:21:16.142 "trtype": "$TEST_TRANSPORT", 00:21:16.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.142 "adrfam": "ipv4", 00:21:16.142 "trsvcid": "$NVMF_PORT", 00:21:16.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.142 "hdgst": ${hdgst:-false}, 00:21:16.142 "ddgst": ${ddgst:-false} 00:21:16.142 }, 00:21:16.142 "method": "bdev_nvme_attach_controller" 00:21:16.142 } 00:21:16.142 EOF 00:21:16.142 )") 00:21:16.142 12:45:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.142 { 00:21:16.142 "params": { 00:21:16.142 "name": "Nvme$subsystem", 00:21:16.142 "trtype": "$TEST_TRANSPORT", 00:21:16.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.142 "adrfam": "ipv4", 00:21:16.142 "trsvcid": "$NVMF_PORT", 00:21:16.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.142 "hdgst": ${hdgst:-false}, 00:21:16.142 "ddgst": ${ddgst:-false} 00:21:16.142 }, 00:21:16.142 "method": "bdev_nvme_attach_controller" 00:21:16.142 } 00:21:16.142 EOF 00:21:16.142 )") 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
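The sequence traced above is the whole per-subsystem setup for this test: each of bdev_null0 and bdev_null1 is a 64 MB, 512-byte-block null bdev carrying 16 bytes of per-block metadata with DIF type 1, wrapped in its own NVMe-oF subsystem and exposed on the TCP listener at 10.0.0.2:4420. A condensed sketch of the same RPC calls, assuming rpc_cmd here resolves to scripts/rpc.py talking to the running nvmf_tgt (names and sizes copied from the trace):

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# dedicated subsystem, any host may connect
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
# attach the bdev as a namespace and listen on the veth target address
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420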
00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:16.142 12:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:16.142 "params": { 00:21:16.142 "name": "Nvme0", 00:21:16.142 "trtype": "tcp", 00:21:16.142 "traddr": "10.0.0.2", 00:21:16.142 "adrfam": "ipv4", 00:21:16.142 "trsvcid": "4420", 00:21:16.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:16.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:16.142 "hdgst": false, 00:21:16.142 "ddgst": false 00:21:16.142 }, 00:21:16.142 "method": "bdev_nvme_attach_controller" 00:21:16.142 },{ 00:21:16.142 "params": { 00:21:16.142 "name": "Nvme1", 00:21:16.142 "trtype": "tcp", 00:21:16.142 "traddr": "10.0.0.2", 00:21:16.142 "adrfam": "ipv4", 00:21:16.142 "trsvcid": "4420", 00:21:16.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.142 "hdgst": false, 00:21:16.142 "ddgst": false 00:21:16.142 }, 00:21:16.142 "method": "bdev_nvme_attach_controller" 00:21:16.143 }' 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:16.143 12:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:16.143 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:16.143 ... 00:21:16.143 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:16.143 ... 
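fio is not opening kernel block devices in this run: LD_PRELOAD injects the SPDK fio bdev plugin (build/fio/spdk_bdev), --spdk_json_conf reads the bdev_nvme_attach_controller JSON printed just above from /dev/fd/62, and the job file produced by gen_fio_conf arrives on /dev/fd/61. That job file is never echoed to the log; a minimal illustrative stand-in consistent with the parameters set at target/dif.sh@115 and with the fio banner below (the job names, the Nvme0n1/Nvme1n1 bdev names and the thread=1 setting are assumptions, not copied from the log):

cat > /tmp/fio_dif_rand_params.job <<'EOF'
[global]
# spdk_bdev comes from the LD_PRELOADed plugin and runs in thread mode
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF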
00:21:16.143 fio-3.35 00:21:16.143 Starting 4 threads 00:21:20.327 00:21:20.327 filename0: (groupid=0, jobs=1): err= 0: pid=83756: Mon Jul 15 12:45:52 2024 00:21:20.327 read: IOPS=2026, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5002msec) 00:21:20.327 slat (usec): min=5, max=206, avg=18.28, stdev= 8.66 00:21:20.327 clat (usec): min=1060, max=7447, avg=3898.47, stdev=996.83 00:21:20.327 lat (usec): min=1069, max=7483, avg=3916.75, stdev=996.79 00:21:20.327 clat percentiles (usec): 00:21:20.327 | 1.00th=[ 1696], 5.00th=[ 2343], 10.00th=[ 2573], 20.00th=[ 3130], 00:21:20.327 | 30.00th=[ 3326], 40.00th=[ 3425], 50.00th=[ 3785], 60.00th=[ 4047], 00:21:20.327 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5342], 00:21:20.327 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 6456], 99.95th=[ 6652], 00:21:20.327 | 99.99th=[ 7111] 00:21:20.327 bw ( KiB/s): min=12912, max=17843, per=24.83%, avg=16126.56, stdev=1311.94, samples=9 00:21:20.327 iops : min= 1614, max= 2230, avg=2015.78, stdev=163.93, samples=9 00:21:20.327 lat (msec) : 2=1.67%, 4=57.86%, 10=40.48% 00:21:20.327 cpu : usr=91.84%, sys=6.82%, ctx=61, majf=0, minf=0 00:21:20.327 IO depths : 1=0.1%, 2=4.4%, 4=64.2%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:20.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 complete : 0=0.0%, 4=98.3%, 8=1.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 issued rwts: total=10139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.328 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:20.328 filename0: (groupid=0, jobs=1): err= 0: pid=83757: Mon Jul 15 12:45:52 2024 00:21:20.328 read: IOPS=2001, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5001msec) 00:21:20.328 slat (nsec): min=4858, max=65327, avg=19376.31, stdev=9703.88 00:21:20.328 clat (usec): min=984, max=7166, avg=3942.78, stdev=1031.64 00:21:20.328 lat (usec): min=992, max=7188, avg=3962.15, stdev=1031.15 00:21:20.328 clat percentiles (usec): 00:21:20.328 | 1.00th=[ 1844], 5.00th=[ 2147], 10.00th=[ 2474], 20.00th=[ 3195], 00:21:20.328 | 30.00th=[ 3359], 40.00th=[ 3458], 50.00th=[ 3851], 60.00th=[ 4293], 00:21:20.328 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5342], 00:21:20.328 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6456], 99.95th=[ 6521], 00:21:20.328 | 99.99th=[ 6915] 00:21:20.328 bw ( KiB/s): min=14208, max=19008, per=24.70%, avg=16042.67, stdev=1419.36, samples=9 00:21:20.328 iops : min= 1776, max= 2376, avg=2005.33, stdev=177.42, samples=9 00:21:20.328 lat (usec) : 1000=0.05% 00:21:20.328 lat (msec) : 2=3.02%, 4=54.41%, 10=42.52% 00:21:20.328 cpu : usr=92.44%, sys=6.64%, ctx=5, majf=0, minf=9 00:21:20.328 IO depths : 1=0.1%, 2=5.9%, 4=63.7%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:20.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 issued rwts: total=10012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.328 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:20.328 filename1: (groupid=0, jobs=1): err= 0: pid=83758: Mon Jul 15 12:45:52 2024 00:21:20.328 read: IOPS=2136, BW=16.7MiB/s (17.5MB/s)(83.5MiB/5003msec) 00:21:20.328 slat (nsec): min=7534, max=61910, avg=12902.44, stdev=6707.32 00:21:20.328 clat (usec): min=978, max=7210, avg=3709.94, stdev=1001.78 00:21:20.328 lat (usec): min=988, max=7233, avg=3722.84, stdev=1001.12 00:21:20.328 clat percentiles (usec): 00:21:20.328 | 1.00th=[ 1401], 5.00th=[ 2040], 10.00th=[ 2442], 20.00th=[ 2802], 
00:21:20.328 | 30.00th=[ 3294], 40.00th=[ 3392], 50.00th=[ 3523], 60.00th=[ 3851], 00:21:20.328 | 70.00th=[ 4359], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5276], 00:21:20.328 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6390], 99.95th=[ 6652], 00:21:20.328 | 99.99th=[ 7046] 00:21:20.328 bw ( KiB/s): min=16272, max=19072, per=26.37%, avg=17128.89, stdev=996.48, samples=9 00:21:20.328 iops : min= 2034, max= 2384, avg=2141.11, stdev=124.56, samples=9 00:21:20.328 lat (usec) : 1000=0.05% 00:21:20.328 lat (msec) : 2=4.20%, 4=61.91%, 10=33.84% 00:21:20.328 cpu : usr=92.42%, sys=6.54%, ctx=41, majf=0, minf=0 00:21:20.328 IO depths : 1=0.1%, 2=2.5%, 4=65.9%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:20.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 complete : 0=0.0%, 4=99.1%, 8=0.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 issued rwts: total=10688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.328 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:20.328 filename1: (groupid=0, jobs=1): err= 0: pid=83759: Mon Jul 15 12:45:52 2024 00:21:20.328 read: IOPS=1955, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5001msec) 00:21:20.328 slat (usec): min=4, max=120, avg=20.30, stdev= 8.98 00:21:20.328 clat (usec): min=1370, max=7461, avg=4034.17, stdev=953.22 00:21:20.328 lat (usec): min=1385, max=7486, avg=4054.48, stdev=952.25 00:21:20.328 clat percentiles (usec): 00:21:20.328 | 1.00th=[ 2057], 5.00th=[ 2442], 10.00th=[ 2737], 20.00th=[ 3294], 00:21:20.328 | 30.00th=[ 3359], 40.00th=[ 3490], 50.00th=[ 3884], 60.00th=[ 4555], 00:21:20.328 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5276], 95.00th=[ 5407], 00:21:20.328 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 6325], 99.95th=[ 6915], 00:21:20.328 | 99.99th=[ 7439] 00:21:20.328 bw ( KiB/s): min=12880, max=16576, per=24.09%, avg=15648.00, stdev=1234.54, samples=9 00:21:20.328 iops : min= 1610, max= 2072, avg=1956.00, stdev=154.32, samples=9 00:21:20.328 lat (msec) : 2=0.55%, 4=53.54%, 10=45.91% 00:21:20.328 cpu : usr=92.86%, sys=6.12%, ctx=29, majf=0, minf=10 00:21:20.328 IO depths : 1=0.1%, 2=6.8%, 4=62.9%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:20.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 complete : 0=0.0%, 4=97.4%, 8=2.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.328 issued rwts: total=9781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.328 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:20.328 00:21:20.328 Run status group 0 (all jobs): 00:21:20.328 READ: bw=63.4MiB/s (66.5MB/s), 15.3MiB/s-16.7MiB/s (16.0MB/s-17.5MB/s), io=317MiB (333MB), run=5001-5003msec 00:21:20.585 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.586 00:21:20.586 real 0m23.627s 00:21:20.586 user 2m3.045s 00:21:20.586 sys 0m9.044s 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:20.586 ************************************ 00:21:20.586 12:45:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.586 END TEST fio_dif_rand_params 00:21:20.586 ************************************ 00:21:20.586 12:45:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:20.586 12:45:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:20.843 12:45:53 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:20.843 12:45:53 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.843 12:45:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:20.843 ************************************ 00:21:20.843 START TEST fio_dif_digest 00:21:20.843 ************************************ 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:20.843 12:45:53 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:20.843 bdev_null0 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:20.843 [2024-07-15 12:45:53.314384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:21:20.843 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.843 { 00:21:20.843 "params": { 00:21:20.843 "name": "Nvme$subsystem", 00:21:20.844 "trtype": "$TEST_TRANSPORT", 00:21:20.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.844 "adrfam": "ipv4", 00:21:20.844 "trsvcid": "$NVMF_PORT", 00:21:20.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.844 "hdgst": ${hdgst:-false}, 00:21:20.844 "ddgst": ${ddgst:-false} 00:21:20.844 }, 00:21:20.844 "method": "bdev_nvme_attach_controller" 00:21:20.844 } 00:21:20.844 EOF 00:21:20.844 )") 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:20.844 "params": { 00:21:20.844 "name": "Nvme0", 00:21:20.844 "trtype": "tcp", 00:21:20.844 "traddr": "10.0.0.2", 00:21:20.844 "adrfam": "ipv4", 00:21:20.844 "trsvcid": "4420", 00:21:20.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:20.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:20.844 "hdgst": true, 00:21:20.844 "ddgst": true 00:21:20.844 }, 00:21:20.844 "method": "bdev_nvme_attach_controller" 00:21:20.844 }' 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:20.844 12:45:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.844 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:20.844 ... 
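Relative to the earlier rand_params pass, this run changes two things: the null bdev behind cnode0 is created with --dif-type 3, and the attach JSON printed above enables hdgst and ddgst, so every NVMe/TCP PDU on the association carries CRC32C header and data digests that both ends generate and verify. The test itself drives I/O through the SPDK fio bdev plugin; purely as an illustration of what those two switches control, a digest-enabled association from the kernel initiator would look roughly like this (hypothetical, not part of this test):

# kernel NVMe/TCP initiator with PDU header and data digests enabled
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest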
00:21:20.844 fio-3.35 00:21:20.844 Starting 3 threads 00:21:33.037 00:21:33.037 filename0: (groupid=0, jobs=1): err= 0: pid=83861: Mon Jul 15 12:46:04 2024 00:21:33.037 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(282MiB/10009msec) 00:21:33.037 slat (nsec): min=7973, max=55220, avg=17012.38, stdev=8111.00 00:21:33.037 clat (usec): min=13102, max=17344, avg=13283.18, stdev=212.41 00:21:33.037 lat (usec): min=13116, max=17376, avg=13300.19, stdev=213.21 00:21:33.037 clat percentiles (usec): 00:21:33.037 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:33.037 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:21:33.037 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:21:33.037 | 99.00th=[13829], 99.50th=[13829], 99.90th=[17433], 99.95th=[17433], 00:21:33.037 | 99.99th=[17433] 00:21:33.037 bw ( KiB/s): min=28416, max=29184, per=33.32%, avg=28802.80, stdev=391.29, samples=20 00:21:33.037 iops : min= 222, max= 228, avg=225.00, stdev= 3.08, samples=20 00:21:33.037 lat (msec) : 20=100.00% 00:21:33.037 cpu : usr=91.56%, sys=7.87%, ctx=11, majf=0, minf=0 00:21:33.037 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:33.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.037 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.037 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:33.037 filename0: (groupid=0, jobs=1): err= 0: pid=83862: Mon Jul 15 12:46:04 2024 00:21:33.037 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(282MiB/10010msec) 00:21:33.037 slat (nsec): min=7929, max=58795, avg=15921.03, stdev=7533.29 00:21:33.037 clat (usec): min=11965, max=16876, avg=13289.06, stdev=243.98 00:21:33.037 lat (usec): min=11973, max=16891, avg=13304.98, stdev=244.68 00:21:33.037 clat percentiles (usec): 00:21:33.037 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:33.037 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:21:33.037 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:33.037 | 99.00th=[13829], 99.50th=[14615], 99.90th=[16909], 99.95th=[16909], 00:21:33.037 | 99.99th=[16909] 00:21:33.037 bw ( KiB/s): min=28416, max=29184, per=33.32%, avg=28800.00, stdev=393.98, samples=20 00:21:33.037 iops : min= 222, max= 228, avg=225.00, stdev= 3.08, samples=20 00:21:33.037 lat (msec) : 20=100.00% 00:21:33.037 cpu : usr=91.63%, sys=7.69%, ctx=59, majf=0, minf=0 00:21:33.037 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:33.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.037 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.037 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:33.037 filename0: (groupid=0, jobs=1): err= 0: pid=83863: Mon Jul 15 12:46:04 2024 00:21:33.037 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(282MiB/10008msec) 00:21:33.037 slat (nsec): min=7979, max=49390, avg=14969.77, stdev=6997.29 00:21:33.037 clat (usec): min=10588, max=17361, avg=13287.27, stdev=257.30 00:21:33.037 lat (usec): min=10597, max=17381, avg=13302.24, stdev=257.67 00:21:33.037 clat percentiles (usec): 00:21:33.037 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:33.037 | 30.00th=[13304], 40.00th=[13304], 
50.00th=[13304], 60.00th=[13304], 00:21:33.037 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:21:33.037 | 99.00th=[13829], 99.50th=[13829], 99.90th=[17433], 99.95th=[17433], 00:21:33.037 | 99.99th=[17433] 00:21:33.037 bw ( KiB/s): min=28302, max=29184, per=33.34%, avg=28814.21, stdev=401.27, samples=19 00:21:33.037 iops : min= 221, max= 228, avg=225.11, stdev= 3.14, samples=19 00:21:33.037 lat (msec) : 20=100.00% 00:21:33.037 cpu : usr=90.57%, sys=8.82%, ctx=7, majf=0, minf=0 00:21:33.037 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:33.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.037 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.037 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:33.037 00:21:33.037 Run status group 0 (all jobs): 00:21:33.037 READ: bw=84.4MiB/s (88.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=845MiB (886MB), run=10008-10010msec 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.037 00:21:33.037 real 0m11.015s 00:21:33.037 user 0m28.055s 00:21:33.037 sys 0m2.727s 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.037 12:46:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:33.037 ************************************ 00:21:33.037 END TEST fio_dif_digest 00:21:33.037 ************************************ 00:21:33.037 12:46:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:33.038 12:46:04 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:33.038 12:46:04 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.038 rmmod nvme_tcp 00:21:33.038 rmmod nvme_fabrics 00:21:33.038 rmmod nvme_keyring 00:21:33.038 12:46:04 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83121 ']' 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83121 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83121 ']' 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83121 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83121 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:33.038 killing process with pid 83121 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83121' 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83121 00:21:33.038 12:46:04 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83121 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:33.038 12:46:04 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:33.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:33.038 Waiting for block devices as requested 00:21:33.038 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.038 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.038 12:46:05 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.038 12:46:05 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.038 12:46:05 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.038 12:46:05 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.038 12:46:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.038 12:46:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:33.038 12:46:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.038 12:46:05 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:33.038 00:21:33.038 real 0m59.922s 00:21:33.038 user 3m46.466s 00:21:33.038 sys 0m21.011s 00:21:33.038 12:46:05 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.038 12:46:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:33.038 ************************************ 00:21:33.038 END TEST nvmf_dif 00:21:33.038 ************************************ 00:21:33.038 12:46:05 -- common/autotest_common.sh@1142 -- # return 0 00:21:33.038 12:46:05 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:33.038 12:46:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:33.038 12:46:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.038 12:46:05 -- common/autotest_common.sh@10 -- # set +x 00:21:33.038 ************************************ 00:21:33.038 START TEST nvmf_abort_qd_sizes 00:21:33.038 ************************************ 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:33.038 * Looking for test storage... 00:21:33.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:33.038 12:46:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:33.038 Cannot find device "nvmf_tgt_br" 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:33.038 Cannot find device "nvmf_tgt_br2" 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:33.038 Cannot find device "nvmf_tgt_br" 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:33.038 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:33.038 Cannot find device "nvmf_tgt_br2" 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:33.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:33.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:33.039 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:33.297 12:46:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:33.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:21:33.297 00:21:33.297 --- 10.0.0.2 ping statistics --- 00:21:33.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.297 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:33.297 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:33.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:33.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:21:33.298 00:21:33.298 --- 10.0.0.3 ping statistics --- 00:21:33.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.298 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:33.298 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:33.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:21:33.298 00:21:33.298 --- 10.0.0.1 ping statistics --- 00:21:33.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.298 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:33.298 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.298 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:33.298 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:33.298 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:33.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:34.124 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:34.124 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:34.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84459 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84459 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84459 ']' 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.124 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:34.382 [2024-07-15 12:46:06.813094] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:34.382 [2024-07-15 12:46:06.813695] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.382 [2024-07-15 12:46:06.946957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.640 [2024-07-15 12:46:07.070471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.640 [2024-07-15 12:46:07.070553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.640 [2024-07-15 12:46:07.070566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.640 [2024-07-15 12:46:07.070575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.640 [2024-07-15 12:46:07.070583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.640 [2024-07-15 12:46:07.070724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.640 [2024-07-15 12:46:07.071245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.640 [2024-07-15 12:46:07.071308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.640 [2024-07-15 12:46:07.071315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.640 [2024-07-15 12:46:07.127023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:35.206 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:35.207 12:46:07 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.207 12:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:35.207 ************************************ 00:21:35.207 START TEST spdk_target_abort 00:21:35.207 ************************************ 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.207 spdk_targetn1 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.207 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.207 [2024-07-15 12:46:07.884349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.464 [2024-07-15 12:46:07.912522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.464 12:46:07 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:35.464 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:35.465 12:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:38.753 Initializing NVMe Controllers 00:21:38.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:38.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:38.753 Initialization complete. Launching workers. 
00:21:38.753 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10741, failed: 0 00:21:38.754 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1016, failed to submit 9725 00:21:38.754 success 794, unsuccess 222, failed 0 00:21:38.754 12:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:38.754 12:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:42.047 Initializing NVMe Controllers 00:21:42.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:42.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:42.047 Initialization complete. Launching workers. 00:21:42.047 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8973, failed: 0 00:21:42.047 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1136, failed to submit 7837 00:21:42.047 success 421, unsuccess 715, failed 0 00:21:42.047 12:46:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:42.047 12:46:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:45.331 Initializing NVMe Controllers 00:21:45.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:45.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:45.331 Initialization complete. Launching workers. 
00:21:45.331 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29980, failed: 0 00:21:45.331 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2273, failed to submit 27707 00:21:45.331 success 386, unsuccess 1887, failed 0 00:21:45.331 12:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:45.331 12:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.331 12:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:45.331 12:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.331 12:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:45.331 12:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.331 12:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84459 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84459 ']' 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84459 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84459 00:21:45.592 killing process with pid 84459 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84459' 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84459 00:21:45.592 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84459 00:21:45.859 00:21:45.859 real 0m10.702s 00:21:45.859 user 0m42.905s 00:21:45.859 sys 0m2.166s 00:21:45.859 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.859 ************************************ 00:21:45.859 END TEST spdk_target_abort 00:21:45.859 12:46:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:45.859 ************************************ 00:21:46.116 12:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:46.116 12:46:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:46.116 12:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:46.116 12:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.116 12:46:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:46.116 
************************************ 00:21:46.116 START TEST kernel_target_abort 00:21:46.116 ************************************ 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:46.116 12:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:46.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:46.373 Waiting for block devices as requested 00:21:46.373 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:46.631 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:46.631 No valid GPT data, bailing 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:46.631 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:46.890 No valid GPT data, bailing 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:46.890 No valid GPT data, bailing 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:46.890 No valid GPT data, bailing 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c --hostid=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c -a 10.0.0.1 -t tcp -s 4420 00:21:46.890 00:21:46.890 Discovery Log Number of Records 2, Generation counter 2 00:21:46.890 =====Discovery Log Entry 0====== 00:21:46.890 trtype: tcp 00:21:46.890 adrfam: ipv4 00:21:46.890 subtype: current discovery subsystem 00:21:46.890 treq: not specified, sq flow control disable supported 00:21:46.890 portid: 1 00:21:46.890 trsvcid: 4420 00:21:46.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:46.890 traddr: 10.0.0.1 00:21:46.890 eflags: none 00:21:46.890 sectype: none 00:21:46.890 =====Discovery Log Entry 1====== 00:21:46.890 trtype: tcp 00:21:46.890 adrfam: ipv4 00:21:46.890 subtype: nvme subsystem 00:21:46.890 treq: not specified, sq flow control disable supported 00:21:46.890 portid: 1 00:21:46.890 trsvcid: 4420 00:21:46.890 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:46.890 traddr: 10.0.0.1 00:21:46.890 eflags: none 00:21:46.890 sectype: none 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:46.890 12:46:19 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:46.890 12:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:50.171 Initializing NVMe Controllers 00:21:50.171 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:50.171 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:50.171 Initialization complete. Launching workers. 00:21:50.171 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35112, failed: 0 00:21:50.171 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35112, failed to submit 0 00:21:50.171 success 0, unsuccess 35112, failed 0 00:21:50.171 12:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:50.171 12:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:53.483 Initializing NVMe Controllers 00:21:53.483 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:53.483 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:53.483 Initialization complete. Launching workers. 
00:21:53.483 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71070, failed: 0 00:21:53.483 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30657, failed to submit 40413 00:21:53.483 success 0, unsuccess 30657, failed 0 00:21:53.483 12:46:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:53.483 12:46:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:56.769 Initializing NVMe Controllers 00:21:56.769 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:56.769 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:56.769 Initialization complete. Launching workers. 00:21:56.769 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83802, failed: 0 00:21:56.769 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20914, failed to submit 62888 00:21:56.769 success 0, unsuccess 20914, failed 0 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:56.769 12:46:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:57.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:59.232 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:59.232 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:59.490 00:21:59.490 real 0m13.365s 00:21:59.490 user 0m6.236s 00:21:59.490 sys 0m4.451s 00:21:59.490 12:46:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.490 12:46:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:59.490 ************************************ 00:21:59.490 END TEST kernel_target_abort 00:21:59.490 ************************************ 00:21:59.490 12:46:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:59.490 12:46:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:59.490 
12:46:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:59.490 12:46:31 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.490 12:46:31 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:59.490 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.490 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:59.490 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.490 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.490 rmmod nvme_tcp 00:21:59.490 rmmod nvme_fabrics 00:21:59.490 rmmod nvme_keyring 00:21:59.490 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.490 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84459 ']' 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84459 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84459 ']' 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84459 00:21:59.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84459) - No such process 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84459 is not found' 00:21:59.491 Process with pid 84459 is not found 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:59.491 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:59.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:59.747 Waiting for block devices as requested 00:22:00.005 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.005 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:00.005 00:22:00.005 real 0m27.163s 00:22:00.005 user 0m50.234s 00:22:00.005 sys 0m7.915s 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.005 12:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 ************************************ 00:22:00.005 END TEST nvmf_abort_qd_sizes 00:22:00.005 ************************************ 00:22:00.005 12:46:32 -- common/autotest_common.sh@1142 -- # return 0 00:22:00.005 12:46:32 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:00.005 12:46:32 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:22:00.005 12:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.005 12:46:32 -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 ************************************ 00:22:00.005 START TEST keyring_file 00:22:00.005 ************************************ 00:22:00.005 12:46:32 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:00.265 * Looking for test storage... 00:22:00.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:00.265 12:46:32 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:00.265 12:46:32 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:00.265 12:46:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.265 12:46:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.265 12:46:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.265 12:46:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.265 12:46:32 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.265 12:46:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.265 12:46:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:00.265 12:46:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.265 12:46:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.265 12:46:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:00.265 12:46:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:00.265 12:46:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.J92ojIlsy2 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.J92ojIlsy2 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.J92ojIlsy2 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.J92ojIlsy2 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.H3zMYaRo8Q 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:00.266 12:46:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.H3zMYaRo8Q 00:22:00.266 12:46:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.H3zMYaRo8Q 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.H3zMYaRo8Q 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=85319 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85319 00:22:00.266 12:46:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85319 ']' 00:22:00.266 12:46:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.266 12:46:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.266 12:46:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.266 12:46:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.266 12:46:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:00.266 12:46:32 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.266 [2024-07-15 12:46:32.947589] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:00.266 [2024-07-15 12:46:32.947708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85319 ] 00:22:00.522 [2024-07-15 12:46:33.084767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.778 [2024-07-15 12:46:33.207547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.778 [2024-07-15 12:46:33.262663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:01.342 12:46:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.342 [2024-07-15 12:46:33.911714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.342 null0 00:22:01.342 [2024-07-15 12:46:33.943674] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.342 [2024-07-15 12:46:33.943974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:01.342 [2024-07-15 12:46:33.951681] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.342 12:46:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.342 12:46:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.342 [2024-07-15 12:46:33.963679] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:01.342 request: 00:22:01.342 { 00:22:01.342 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.342 "secure_channel": false, 00:22:01.342 "listen_address": { 00:22:01.342 "trtype": "tcp", 00:22:01.342 "traddr": "127.0.0.1", 00:22:01.342 "trsvcid": "4420" 00:22:01.342 }, 00:22:01.342 "method": "nvmf_subsystem_add_listener", 00:22:01.342 "req_id": 1 00:22:01.342 } 00:22:01.342 Got JSON-RPC error response 00:22:01.342 response: 00:22:01.342 { 00:22:01.342 "code": -32602, 00:22:01.342 "message": "Invalid parameters" 00:22:01.342 } 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:01.343 12:46:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=85336 00:22:01.343 12:46:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85336 /var/tmp/bperf.sock 00:22:01.343 12:46:33 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85336 ']' 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.343 12:46:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.343 [2024-07-15 12:46:34.019853] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:01.343 [2024-07-15 12:46:34.019977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85336 ] 00:22:01.600 [2024-07-15 12:46:34.155574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.857 [2024-07-15 12:46:34.291857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.857 [2024-07-15 12:46:34.354397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:02.422 12:46:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.422 12:46:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:02.422 12:46:35 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:02.422 12:46:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:02.679 12:46:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.H3zMYaRo8Q 00:22:02.679 12:46:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.H3zMYaRo8Q 00:22:03.244 12:46:35 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:22:03.244 12:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.244 12:46:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.244 12:46:35 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:22:03.244 12:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:03.507 12:46:35 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.J92ojIlsy2 == 
\/\t\m\p\/\t\m\p\.\J\9\2\o\j\I\l\s\y\2 ]] 00:22:03.507 12:46:35 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:22:03.507 12:46:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:03.507 12:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.507 12:46:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.507 12:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:03.765 12:46:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.H3zMYaRo8Q == \/\t\m\p\/\t\m\p\.\H\3\z\M\Y\a\R\o\8\Q ]] 00:22:03.765 12:46:36 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:22:03.765 12:46:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:03.765 12:46:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:03.765 12:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.765 12:46:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.765 12:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:04.021 12:46:36 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:22:04.021 12:46:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:22:04.021 12:46:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:04.021 12:46:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.021 12:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.021 12:46:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.021 12:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:04.278 12:46:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:04.278 12:46:36 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.278 12:46:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.535 [2024-07-15 12:46:37.090050] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.535 nvme0n1 00:22:04.535 12:46:37 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:22:04.535 12:46:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:04.535 12:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.535 12:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.535 12:46:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:04.535 12:46:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.114 12:46:37 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:22:05.114 12:46:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:22:05.114 12:46:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:05.114 12:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:05.114 12:46:37 keyring_file -- keyring/common.sh@10 -- 
# jq '.[] | select(.name == "key1")' 00:22:05.114 12:46:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.114 12:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:05.375 12:46:37 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:22:05.375 12:46:37 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:05.375 Running I/O for 1 seconds... 00:22:06.309 00:22:06.309 Latency(us) 00:22:06.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.309 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:06.309 nvme0n1 : 1.01 10714.35 41.85 0.00 0.00 11867.36 7923.90 23712.12 00:22:06.309 =================================================================================================================== 00:22:06.309 Total : 10714.35 41.85 0.00 0.00 11867.36 7923.90 23712.12 00:22:06.309 0 00:22:06.309 12:46:38 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:06.309 12:46:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:06.876 12:46:39 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:22:06.877 12:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:06.877 12:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.877 12:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.877 12:46:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.877 12:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:07.148 12:46:39 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:22:07.148 12:46:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:22:07.148 12:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:07.148 12:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:07.148 12:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.148 12:46:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.148 12:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:07.407 12:46:40 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:07.407 12:46:40 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.407 12:46:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:07.407 12:46:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.407 12:46:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:07.407 12:46:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.407 12:46:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:07.407 12:46:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
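The repeated keyring_get_keys/jq pairs above come from two small helpers in keyring/common.sh that the test uses to read key state over the bdevperf RPC socket. A hedged sketch of that pattern (socket path and jq filters copied from the trace, helper bodies reconstructed by eye):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock
# return the JSON object for one key, and its reference count
get_key()    { "$rpc" -s "$bperf_sock" keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
get_refcnt key0   # 2 while nvme0 is attached with --psk key0, 1 otherwise

The (( 2 == 2 )) and (( 1 == 1 )) assertions in the trace are exactly these helpers compared against the expected counts.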
00:22:07.407 12:46:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.407 12:46:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.665 [2024-07-15 12:46:40.261400] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:07.665 [2024-07-15 12:46:40.261997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff24f0 (107): Transport endpoint is not connected 00:22:07.665 [2024-07-15 12:46:40.262983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff24f0 (9): Bad file descriptor 00:22:07.665 [2024-07-15 12:46:40.263980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:07.665 [2024-07-15 12:46:40.264005] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:07.665 [2024-07-15 12:46:40.264016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:07.665 request: 00:22:07.665 { 00:22:07.665 "name": "nvme0", 00:22:07.665 "trtype": "tcp", 00:22:07.665 "traddr": "127.0.0.1", 00:22:07.665 "adrfam": "ipv4", 00:22:07.665 "trsvcid": "4420", 00:22:07.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:07.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:07.665 "prchk_reftag": false, 00:22:07.665 "prchk_guard": false, 00:22:07.665 "hdgst": false, 00:22:07.665 "ddgst": false, 00:22:07.665 "psk": "key1", 00:22:07.665 "method": "bdev_nvme_attach_controller", 00:22:07.666 "req_id": 1 00:22:07.666 } 00:22:07.666 Got JSON-RPC error response 00:22:07.666 response: 00:22:07.666 { 00:22:07.666 "code": -5, 00:22:07.666 "message": "Input/output error" 00:22:07.666 } 00:22:07.666 12:46:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:07.666 12:46:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.666 12:46:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.666 12:46:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.666 12:46:40 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:22:07.666 12:46:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:07.666 12:46:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:07.666 12:46:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:07.666 12:46:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.666 12:46:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.924 12:46:40 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:22:07.924 12:46:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:22:07.924 12:46:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:07.924 12:46:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:07.924 12:46:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:07.924 12:46:40 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:22:07.924 12:46:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.182 12:46:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:08.182 12:46:40 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:22:08.182 12:46:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:08.440 12:46:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:22:08.440 12:46:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:08.697 12:46:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:22:08.697 12:46:41 keyring_file -- keyring/file.sh@77 -- # jq length 00:22:08.697 12:46:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.955 12:46:41 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:22:08.955 12:46:41 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.J92ojIlsy2 00:22:08.955 12:46:41 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:08.955 12:46:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:08.955 12:46:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:08.955 12:46:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:09.214 12:46:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:09.214 [2024-07-15 12:46:41.858378] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.J92ojIlsy2': 0100660 00:22:09.214 [2024-07-15 12:46:41.858433] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:09.214 request: 00:22:09.214 { 00:22:09.214 "name": "key0", 00:22:09.214 "path": "/tmp/tmp.J92ojIlsy2", 00:22:09.214 "method": "keyring_file_add_key", 00:22:09.214 "req_id": 1 00:22:09.214 } 00:22:09.214 Got JSON-RPC error response 00:22:09.214 response: 00:22:09.214 { 00:22:09.214 "code": -1, 00:22:09.214 "message": "Operation not permitted" 00:22:09.214 } 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.214 12:46:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.214 12:46:41 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.J92ojIlsy2 00:22:09.214 12:46:41 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:09.214 12:46:41 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2 00:22:09.781 12:46:42 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.J92ojIlsy2 00:22:09.781 12:46:42 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:22:09.781 12:46:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:09.781 12:46:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:09.781 12:46:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.781 12:46:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.781 12:46:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.039 12:46:42 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:22:10.039 12:46:42 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.039 12:46:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:10.039 12:46:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.039 12:46:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:10.039 12:46:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.039 12:46:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:10.039 12:46:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.039 12:46:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.039 12:46:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.297 [2024-07-15 12:46:42.762572] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.J92ojIlsy2': No such file or directory 00:22:10.297 [2024-07-15 12:46:42.762626] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:10.297 [2024-07-15 12:46:42.762653] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:10.297 [2024-07-15 12:46:42.762675] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:10.297 [2024-07-15 12:46:42.762684] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:10.297 request: 00:22:10.297 { 00:22:10.297 "name": "nvme0", 00:22:10.297 "trtype": "tcp", 00:22:10.297 "traddr": "127.0.0.1", 00:22:10.297 "adrfam": "ipv4", 00:22:10.297 "trsvcid": "4420", 00:22:10.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.297 "prchk_reftag": false, 00:22:10.297 "prchk_guard": false, 00:22:10.297 "hdgst": false, 00:22:10.297 "ddgst": false, 00:22:10.297 "psk": "key0", 00:22:10.297 "method": "bdev_nvme_attach_controller", 00:22:10.297 "req_id": 1 00:22:10.297 } 00:22:10.297 
Got JSON-RPC error response 00:22:10.297 response: 00:22:10.297 { 00:22:10.297 "code": -19, 00:22:10.297 "message": "No such device" 00:22:10.297 } 00:22:10.297 12:46:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:10.297 12:46:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:10.297 12:46:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:10.297 12:46:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:10.297 12:46:42 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:22:10.297 12:46:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:10.556 12:46:43 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.h8wc41Wwcn 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:10.556 12:46:43 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:10.556 12:46:43 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:10.556 12:46:43 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:10.556 12:46:43 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:10.556 12:46:43 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:10.556 12:46:43 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.h8wc41Wwcn 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.h8wc41Wwcn 00:22:10.556 12:46:43 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.h8wc41Wwcn 00:22:10.556 12:46:43 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h8wc41Wwcn 00:22:10.556 12:46:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h8wc41Wwcn 00:22:10.817 12:46:43 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.817 12:46:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:11.074 nvme0n1 00:22:11.074 12:46:43 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:22:11.074 12:46:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:11.074 12:46:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:11.074 12:46:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.074 12:46:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:22:11.074 12:46:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:11.640 12:46:44 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:22:11.640 12:46:44 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:22:11.640 12:46:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:11.898 12:46:44 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:22:11.898 12:46:44 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:22:11.898 12:46:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.898 12:46:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.898 12:46:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:12.156 12:46:44 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:22:12.156 12:46:44 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:22:12.156 12:46:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:12.156 12:46:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:12.156 12:46:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:12.156 12:46:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.156 12:46:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:12.414 12:46:44 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:22:12.414 12:46:44 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:12.414 12:46:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:12.672 12:46:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:22:12.672 12:46:45 keyring_file -- keyring/file.sh@104 -- # jq length 00:22:12.672 12:46:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.930 12:46:45 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:22:12.930 12:46:45 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h8wc41Wwcn 00:22:12.930 12:46:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h8wc41Wwcn 00:22:13.189 12:46:45 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.H3zMYaRo8Q 00:22:13.189 12:46:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.H3zMYaRo8Q 00:22:13.189 12:46:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:13.189 12:46:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:13.756 nvme0n1 00:22:13.756 12:46:46 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:22:13.756 12:46:46 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:14.015 12:46:46 keyring_file -- keyring/file.sh@112 -- # config='{ 00:22:14.015 "subsystems": [ 00:22:14.015 { 00:22:14.015 "subsystem": "keyring", 00:22:14.015 "config": [ 00:22:14.015 { 00:22:14.015 "method": "keyring_file_add_key", 00:22:14.015 "params": { 00:22:14.015 "name": "key0", 00:22:14.015 "path": "/tmp/tmp.h8wc41Wwcn" 00:22:14.015 } 00:22:14.015 }, 00:22:14.015 { 00:22:14.015 "method": "keyring_file_add_key", 00:22:14.015 "params": { 00:22:14.015 "name": "key1", 00:22:14.015 "path": "/tmp/tmp.H3zMYaRo8Q" 00:22:14.015 } 00:22:14.015 } 00:22:14.015 ] 00:22:14.015 }, 00:22:14.015 { 00:22:14.015 "subsystem": "iobuf", 00:22:14.015 "config": [ 00:22:14.015 { 00:22:14.015 "method": "iobuf_set_options", 00:22:14.015 "params": { 00:22:14.015 "small_pool_count": 8192, 00:22:14.015 "large_pool_count": 1024, 00:22:14.015 "small_bufsize": 8192, 00:22:14.015 "large_bufsize": 135168 00:22:14.015 } 00:22:14.015 } 00:22:14.015 ] 00:22:14.015 }, 00:22:14.015 { 00:22:14.015 "subsystem": "sock", 00:22:14.015 "config": [ 00:22:14.015 { 00:22:14.015 "method": "sock_set_default_impl", 00:22:14.015 "params": { 00:22:14.015 "impl_name": "uring" 00:22:14.015 } 00:22:14.015 }, 00:22:14.015 { 00:22:14.015 "method": "sock_impl_set_options", 00:22:14.015 "params": { 00:22:14.015 "impl_name": "ssl", 00:22:14.015 "recv_buf_size": 4096, 00:22:14.015 "send_buf_size": 4096, 00:22:14.015 "enable_recv_pipe": true, 00:22:14.015 "enable_quickack": false, 00:22:14.015 "enable_placement_id": 0, 00:22:14.015 "enable_zerocopy_send_server": true, 00:22:14.015 "enable_zerocopy_send_client": false, 00:22:14.015 "zerocopy_threshold": 0, 00:22:14.015 "tls_version": 0, 00:22:14.015 "enable_ktls": false 00:22:14.015 } 00:22:14.015 }, 00:22:14.015 { 00:22:14.015 "method": "sock_impl_set_options", 00:22:14.015 "params": { 00:22:14.015 "impl_name": "posix", 00:22:14.015 "recv_buf_size": 2097152, 00:22:14.015 "send_buf_size": 2097152, 00:22:14.015 "enable_recv_pipe": true, 00:22:14.015 "enable_quickack": false, 00:22:14.015 "enable_placement_id": 0, 00:22:14.015 "enable_zerocopy_send_server": true, 00:22:14.015 "enable_zerocopy_send_client": false, 00:22:14.015 "zerocopy_threshold": 0, 00:22:14.015 "tls_version": 0, 00:22:14.016 "enable_ktls": false 00:22:14.016 } 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "method": "sock_impl_set_options", 00:22:14.016 "params": { 00:22:14.016 "impl_name": "uring", 00:22:14.016 "recv_buf_size": 2097152, 00:22:14.016 "send_buf_size": 2097152, 00:22:14.016 "enable_recv_pipe": true, 00:22:14.016 "enable_quickack": false, 00:22:14.016 "enable_placement_id": 0, 00:22:14.016 "enable_zerocopy_send_server": false, 00:22:14.016 "enable_zerocopy_send_client": false, 00:22:14.016 "zerocopy_threshold": 0, 00:22:14.016 "tls_version": 0, 00:22:14.016 "enable_ktls": false 00:22:14.016 } 00:22:14.016 } 00:22:14.016 ] 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "subsystem": "vmd", 00:22:14.016 "config": [] 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "subsystem": "accel", 00:22:14.016 "config": [ 00:22:14.016 { 00:22:14.016 "method": "accel_set_options", 00:22:14.016 "params": { 00:22:14.016 "small_cache_size": 128, 00:22:14.016 "large_cache_size": 16, 00:22:14.016 "task_count": 2048, 00:22:14.016 "sequence_count": 2048, 00:22:14.016 "buf_count": 2048 00:22:14.016 } 00:22:14.016 } 00:22:14.016 ] 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "subsystem": "bdev", 00:22:14.016 "config": [ 00:22:14.016 { 
00:22:14.016 "method": "bdev_set_options", 00:22:14.016 "params": { 00:22:14.016 "bdev_io_pool_size": 65535, 00:22:14.016 "bdev_io_cache_size": 256, 00:22:14.016 "bdev_auto_examine": true, 00:22:14.016 "iobuf_small_cache_size": 128, 00:22:14.016 "iobuf_large_cache_size": 16 00:22:14.016 } 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "method": "bdev_raid_set_options", 00:22:14.016 "params": { 00:22:14.016 "process_window_size_kb": 1024 00:22:14.016 } 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "method": "bdev_iscsi_set_options", 00:22:14.016 "params": { 00:22:14.016 "timeout_sec": 30 00:22:14.016 } 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "method": "bdev_nvme_set_options", 00:22:14.016 "params": { 00:22:14.016 "action_on_timeout": "none", 00:22:14.016 "timeout_us": 0, 00:22:14.016 "timeout_admin_us": 0, 00:22:14.016 "keep_alive_timeout_ms": 10000, 00:22:14.016 "arbitration_burst": 0, 00:22:14.016 "low_priority_weight": 0, 00:22:14.016 "medium_priority_weight": 0, 00:22:14.016 "high_priority_weight": 0, 00:22:14.016 "nvme_adminq_poll_period_us": 10000, 00:22:14.016 "nvme_ioq_poll_period_us": 0, 00:22:14.016 "io_queue_requests": 512, 00:22:14.016 "delay_cmd_submit": true, 00:22:14.016 "transport_retry_count": 4, 00:22:14.016 "bdev_retry_count": 3, 00:22:14.016 "transport_ack_timeout": 0, 00:22:14.016 "ctrlr_loss_timeout_sec": 0, 00:22:14.016 "reconnect_delay_sec": 0, 00:22:14.016 "fast_io_fail_timeout_sec": 0, 00:22:14.016 "disable_auto_failback": false, 00:22:14.016 "generate_uuids": false, 00:22:14.016 "transport_tos": 0, 00:22:14.016 "nvme_error_stat": false, 00:22:14.016 "rdma_srq_size": 0, 00:22:14.016 "io_path_stat": false, 00:22:14.016 "allow_accel_sequence": false, 00:22:14.016 "rdma_max_cq_size": 0, 00:22:14.016 "rdma_cm_event_timeout_ms": 0, 00:22:14.016 "dhchap_digests": [ 00:22:14.016 "sha256", 00:22:14.016 "sha384", 00:22:14.016 "sha512" 00:22:14.016 ], 00:22:14.016 "dhchap_dhgroups": [ 00:22:14.016 "null", 00:22:14.016 "ffdhe2048", 00:22:14.016 "ffdhe3072", 00:22:14.016 "ffdhe4096", 00:22:14.016 "ffdhe6144", 00:22:14.016 "ffdhe8192" 00:22:14.016 ] 00:22:14.016 } 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "method": "bdev_nvme_attach_controller", 00:22:14.016 "params": { 00:22:14.016 "name": "nvme0", 00:22:14.016 "trtype": "TCP", 00:22:14.016 "adrfam": "IPv4", 00:22:14.016 "traddr": "127.0.0.1", 00:22:14.016 "trsvcid": "4420", 00:22:14.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.016 "prchk_reftag": false, 00:22:14.016 "prchk_guard": false, 00:22:14.016 "ctrlr_loss_timeout_sec": 0, 00:22:14.016 "reconnect_delay_sec": 0, 00:22:14.016 "fast_io_fail_timeout_sec": 0, 00:22:14.016 "psk": "key0", 00:22:14.016 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:14.016 "hdgst": false, 00:22:14.016 "ddgst": false 00:22:14.016 } 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "method": "bdev_nvme_set_hotplug", 00:22:14.016 "params": { 00:22:14.016 "period_us": 100000, 00:22:14.016 "enable": false 00:22:14.016 } 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "method": "bdev_wait_for_examine" 00:22:14.016 } 00:22:14.016 ] 00:22:14.016 }, 00:22:14.016 { 00:22:14.016 "subsystem": "nbd", 00:22:14.016 "config": [] 00:22:14.016 } 00:22:14.016 ] 00:22:14.016 }' 00:22:14.016 12:46:46 keyring_file -- keyring/file.sh@114 -- # killprocess 85336 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85336 ']' 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85336 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@953 -- # uname 
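Earlier in this block the test exercises the key-file permission and existence checks: keyring_file_add_key is rejected with "Operation not permitted" while the file is mode 0660, accepted after chmod 0600, and bdev_nvme_attach_controller later fails with "No such device" once the file has been removed. A hedged sketch of the permission part, with the temporary key path taken from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
chmod 0660 /tmp/tmp.J92ojIlsy2
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2   # fails: Operation not permitted
chmod 0600 /tmp/tmp.J92ojIlsy2
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J92ojIlsy2   # succeeds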
00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85336 00:22:14.016 killing process with pid 85336 00:22:14.016 Received shutdown signal, test time was about 1.000000 seconds 00:22:14.016 00:22:14.016 Latency(us) 00:22:14.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.016 =================================================================================================================== 00:22:14.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85336' 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@967 -- # kill 85336 00:22:14.016 12:46:46 keyring_file -- common/autotest_common.sh@972 -- # wait 85336 00:22:14.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:14.275 12:46:46 keyring_file -- keyring/file.sh@117 -- # bperfpid=85591 00:22:14.275 12:46:46 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85591 /var/tmp/bperf.sock 00:22:14.275 12:46:46 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85591 ']' 00:22:14.275 12:46:46 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:14.275 12:46:46 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.275 12:46:46 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
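The large JSON blob above is the save_config output captured from the first bdevperf, and the second bdevperf (pid 85591) is started with that blob as its startup configuration via -c /dev/fd/63. A hedged sketch of that hand-off; the <(...) process substitution is an assumption about how /dev/fd/63 is produced, while the bdevperf flags are copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
config=$("$rpc" -s /var/tmp/bperf.sock save_config)   # dump keyring, sock and bdev subsystems as JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")

Starting from the saved config lets the second run restore the file-backed keys and the PSK-protected controller from JSON alone, without re-issuing the individual RPCs.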
00:22:14.275 12:46:46 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.275 12:46:46 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:14.275 12:46:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:14.275 12:46:46 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:22:14.275 "subsystems": [ 00:22:14.275 { 00:22:14.275 "subsystem": "keyring", 00:22:14.275 "config": [ 00:22:14.275 { 00:22:14.275 "method": "keyring_file_add_key", 00:22:14.275 "params": { 00:22:14.275 "name": "key0", 00:22:14.275 "path": "/tmp/tmp.h8wc41Wwcn" 00:22:14.275 } 00:22:14.275 }, 00:22:14.275 { 00:22:14.275 "method": "keyring_file_add_key", 00:22:14.275 "params": { 00:22:14.275 "name": "key1", 00:22:14.275 "path": "/tmp/tmp.H3zMYaRo8Q" 00:22:14.275 } 00:22:14.275 } 00:22:14.275 ] 00:22:14.275 }, 00:22:14.275 { 00:22:14.275 "subsystem": "iobuf", 00:22:14.275 "config": [ 00:22:14.275 { 00:22:14.275 "method": "iobuf_set_options", 00:22:14.275 "params": { 00:22:14.275 "small_pool_count": 8192, 00:22:14.275 "large_pool_count": 1024, 00:22:14.275 "small_bufsize": 8192, 00:22:14.275 "large_bufsize": 135168 00:22:14.275 } 00:22:14.275 } 00:22:14.275 ] 00:22:14.275 }, 00:22:14.275 { 00:22:14.275 "subsystem": "sock", 00:22:14.275 "config": [ 00:22:14.275 { 00:22:14.275 "method": "sock_set_default_impl", 00:22:14.275 "params": { 00:22:14.275 "impl_name": "uring" 00:22:14.275 } 00:22:14.275 }, 00:22:14.275 { 00:22:14.275 "method": "sock_impl_set_options", 00:22:14.275 "params": { 00:22:14.275 "impl_name": "ssl", 00:22:14.275 "recv_buf_size": 4096, 00:22:14.275 "send_buf_size": 4096, 00:22:14.275 "enable_recv_pipe": true, 00:22:14.275 "enable_quickack": false, 00:22:14.275 "enable_placement_id": 0, 00:22:14.275 "enable_zerocopy_send_server": true, 00:22:14.275 "enable_zerocopy_send_client": false, 00:22:14.275 "zerocopy_threshold": 0, 00:22:14.275 "tls_version": 0, 00:22:14.275 "enable_ktls": false 00:22:14.275 } 00:22:14.275 }, 00:22:14.275 { 00:22:14.275 "method": "sock_impl_set_options", 00:22:14.275 "params": { 00:22:14.275 "impl_name": "posix", 00:22:14.275 "recv_buf_size": 2097152, 00:22:14.275 "send_buf_size": 2097152, 00:22:14.275 "enable_recv_pipe": true, 00:22:14.275 "enable_quickack": false, 00:22:14.275 "enable_placement_id": 0, 00:22:14.275 "enable_zerocopy_send_server": true, 00:22:14.276 "enable_zerocopy_send_client": false, 00:22:14.276 "zerocopy_threshold": 0, 00:22:14.276 "tls_version": 0, 00:22:14.276 "enable_ktls": false 00:22:14.276 } 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "method": "sock_impl_set_options", 00:22:14.276 "params": { 00:22:14.276 "impl_name": "uring", 00:22:14.276 "recv_buf_size": 2097152, 00:22:14.276 "send_buf_size": 2097152, 00:22:14.276 "enable_recv_pipe": true, 00:22:14.276 "enable_quickack": false, 00:22:14.276 "enable_placement_id": 0, 00:22:14.276 "enable_zerocopy_send_server": false, 00:22:14.276 "enable_zerocopy_send_client": false, 00:22:14.276 "zerocopy_threshold": 0, 00:22:14.276 "tls_version": 0, 00:22:14.276 "enable_ktls": false 00:22:14.276 } 00:22:14.276 } 00:22:14.276 ] 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "subsystem": "vmd", 00:22:14.276 "config": [] 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "subsystem": "accel", 00:22:14.276 "config": [ 00:22:14.276 { 00:22:14.276 "method": "accel_set_options", 00:22:14.276 "params": { 00:22:14.276 "small_cache_size": 128, 00:22:14.276 "large_cache_size": 16, 
00:22:14.276 "task_count": 2048, 00:22:14.276 "sequence_count": 2048, 00:22:14.276 "buf_count": 2048 00:22:14.276 } 00:22:14.276 } 00:22:14.276 ] 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "subsystem": "bdev", 00:22:14.276 "config": [ 00:22:14.276 { 00:22:14.276 "method": "bdev_set_options", 00:22:14.276 "params": { 00:22:14.276 "bdev_io_pool_size": 65535, 00:22:14.276 "bdev_io_cache_size": 256, 00:22:14.276 "bdev_auto_examine": true, 00:22:14.276 "iobuf_small_cache_size": 128, 00:22:14.276 "iobuf_large_cache_size": 16 00:22:14.276 } 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "method": "bdev_raid_set_options", 00:22:14.276 "params": { 00:22:14.276 "process_window_size_kb": 1024 00:22:14.276 } 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "method": "bdev_iscsi_set_options", 00:22:14.276 "params": { 00:22:14.276 "timeout_sec": 30 00:22:14.276 } 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "method": "bdev_nvme_set_options", 00:22:14.276 "params": { 00:22:14.276 "action_on_timeout": "none", 00:22:14.276 "timeout_us": 0, 00:22:14.276 "timeout_admin_us": 0, 00:22:14.276 "keep_alive_timeout_ms": 10000, 00:22:14.276 "arbitration_burst": 0, 00:22:14.276 "low_priority_weight": 0, 00:22:14.276 "medium_priority_weight": 0, 00:22:14.276 "high_priority_weight": 0, 00:22:14.276 "nvme_adminq_poll_period_us": 10000, 00:22:14.276 "nvme_ioq_poll_period_us": 0, 00:22:14.276 "io_queue_requests": 512, 00:22:14.276 "delay_cmd_submit": true, 00:22:14.276 "transport_retry_count": 4, 00:22:14.276 "bdev_retry_count": 3, 00:22:14.276 "transport_ack_timeout": 0, 00:22:14.276 "ctrlr_loss_timeout_sec": 0, 00:22:14.276 "reconnect_delay_sec": 0, 00:22:14.276 "fast_io_fail_timeout_sec": 0, 00:22:14.276 "disable_auto_failback": false, 00:22:14.276 "generate_uuids": false, 00:22:14.276 "transport_tos": 0, 00:22:14.276 "nvme_error_stat": false, 00:22:14.276 "rdma_srq_size": 0, 00:22:14.276 "io_path_stat": false, 00:22:14.276 "allow_accel_sequence": false, 00:22:14.276 "rdma_max_cq_size": 0, 00:22:14.276 "rdma_cm_event_timeout_ms": 0, 00:22:14.276 "dhchap_digests": [ 00:22:14.276 "sha256", 00:22:14.276 "sha384", 00:22:14.276 "sha512" 00:22:14.276 ], 00:22:14.276 "dhchap_dhgroups": [ 00:22:14.276 "null", 00:22:14.276 "ffdhe2048", 00:22:14.276 "ffdhe3072", 00:22:14.276 "ffdhe4096", 00:22:14.276 "ffdhe6144", 00:22:14.276 "ffdhe8192" 00:22:14.276 ] 00:22:14.276 } 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "method": "bdev_nvme_attach_controller", 00:22:14.276 "params": { 00:22:14.276 "name": "nvme0", 00:22:14.276 "trtype": "TCP", 00:22:14.276 "adrfam": "IPv4", 00:22:14.276 "traddr": "127.0.0.1", 00:22:14.276 "trsvcid": "4420", 00:22:14.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.276 "prchk_reftag": false, 00:22:14.276 "prchk_guard": false, 00:22:14.276 "ctrlr_loss_timeout_sec": 0, 00:22:14.276 "reconnect_delay_sec": 0, 00:22:14.276 "fast_io_fail_timeout_sec": 0, 00:22:14.276 "psk": "key0", 00:22:14.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:14.276 "hdgst": false, 00:22:14.276 "ddgst": false 00:22:14.276 } 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "method": "bdev_nvme_set_hotplug", 00:22:14.276 "params": { 00:22:14.276 "period_us": 100000, 00:22:14.276 "enable": false 00:22:14.276 } 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "method": "bdev_wait_for_examine" 00:22:14.276 } 00:22:14.276 ] 00:22:14.276 }, 00:22:14.276 { 00:22:14.276 "subsystem": "nbd", 00:22:14.276 "config": [] 00:22:14.276 } 00:22:14.276 ] 00:22:14.276 }' 00:22:14.276 [2024-07-15 12:46:46.860473] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 
24.03.0 initialization... 00:22:14.276 [2024-07-15 12:46:46.861461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85591 ] 00:22:14.546 [2024-07-15 12:46:47.001452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.546 [2024-07-15 12:46:47.120880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.805 [2024-07-15 12:46:47.256494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:14.805 [2024-07-15 12:46:47.312101] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.373 12:46:47 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.373 12:46:47 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:15.373 12:46:47 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:22:15.373 12:46:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.373 12:46:47 keyring_file -- keyring/file.sh@120 -- # jq length 00:22:15.631 12:46:48 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:22:15.631 12:46:48 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:22:15.631 12:46:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:15.631 12:46:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:15.631 12:46:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:15.631 12:46:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.631 12:46:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:15.889 12:46:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:15.889 12:46:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:22:15.889 12:46:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:15.889 12:46:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:15.889 12:46:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:15.889 12:46:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:15.889 12:46:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:16.148 12:46:48 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:22:16.148 12:46:48 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:22:16.148 12:46:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:16.148 12:46:48 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:22:16.407 12:46:48 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:22:16.407 12:46:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:16.407 12:46:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.h8wc41Wwcn /tmp/tmp.H3zMYaRo8Q 00:22:16.407 12:46:48 keyring_file -- keyring/file.sh@20 -- # killprocess 85591 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85591 ']' 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85591 00:22:16.407 12:46:48 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85591 00:22:16.407 killing process with pid 85591 00:22:16.407 Received shutdown signal, test time was about 1.000000 seconds 00:22:16.407 00:22:16.407 Latency(us) 00:22:16.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.407 =================================================================================================================== 00:22:16.407 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85591' 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@967 -- # kill 85591 00:22:16.407 12:46:48 keyring_file -- common/autotest_common.sh@972 -- # wait 85591 00:22:16.666 12:46:49 keyring_file -- keyring/file.sh@21 -- # killprocess 85319 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85319 ']' 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85319 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85319 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:16.666 killing process with pid 85319 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85319' 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@967 -- # kill 85319 00:22:16.666 [2024-07-15 12:46:49.271521] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:16.666 12:46:49 keyring_file -- common/autotest_common.sh@972 -- # wait 85319 00:22:17.234 00:22:17.234 real 0m16.998s 00:22:17.234 user 0m42.608s 00:22:17.234 sys 0m3.382s 00:22:17.234 12:46:49 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:17.234 12:46:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:17.234 ************************************ 00:22:17.234 END TEST keyring_file 00:22:17.234 ************************************ 00:22:17.234 12:46:49 -- common/autotest_common.sh@1142 -- # return 0 00:22:17.234 12:46:49 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:22:17.234 12:46:49 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:17.234 12:46:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:17.234 12:46:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:17.234 12:46:49 -- common/autotest_common.sh@10 -- # set +x 00:22:17.234 ************************************ 00:22:17.234 START TEST keyring_linux 00:22:17.234 ************************************ 00:22:17.234 12:46:49 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:17.234 * 
Looking for test storage... 00:22:17.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:17.234 12:46:49 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:17.234 12:46:49 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:17.234 12:46:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:17.234 12:46:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.234 12:46:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.234 12:46:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.234 12:46:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.234 12:46:49 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.234 12:46:49 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=88a3e75c-4ef2-471b-8ebd-334c2f5a6b1c 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:17.235 12:46:49 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.235 12:46:49 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.235 12:46:49 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.235 12:46:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.235 12:46:49 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.235 12:46:49 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.235 12:46:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:17.235 12:46:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:17.235 12:46:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:17.235 12:46:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:17.235 12:46:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:17.235 12:46:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:17.235 12:46:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:17.235 12:46:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:17.235 12:46:49 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:17.235 /tmp/:spdk-test:key0 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:17.235 12:46:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:17.235 12:46:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:17.235 12:46:49 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:17.495 12:46:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:17.495 /tmp/:spdk-test:key1 00:22:17.495 12:46:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:17.495 12:46:49 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.495 12:46:49 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85708 00:22:17.495 12:46:49 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85708 00:22:17.495 12:46:49 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85708 ']' 00:22:17.495 12:46:49 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.495 12:46:49 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.495 12:46:49 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.495 12:46:49 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.495 12:46:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:17.495 [2024-07-15 12:46:50.003754] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
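The two interchange keys written above differ only in which byte of the hex key is rotated to the end, so the second one is guaranteed not to match the first. A minimal sketch of the same preparation step, assuming it is run from the SPDK repository root and that the prep_key helper has exactly the signature visible in the trace at keyring/linux.sh lines 47 and 48 (name, hex key, digest, output path):

  # Sketch only: mirrors the prep_key calls traced above.
  source test/keyring/common.sh   # also pulls in test/nvmf/common.sh
  prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
  prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
  ls -l /tmp/:spdk-test:key*      # both files end up mode 0600, per the chmod above
  cat /tmp/:spdk-test:key0        # NVMeTLSkey-1:00:...: interchange string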
00:22:17.495 [2024-07-15 12:46:50.003885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85708 ] 00:22:17.495 [2024-07-15 12:46:50.147417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.754 [2024-07-15 12:46:50.276489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.754 [2024-07-15 12:46:50.334190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:18.322 12:46:51 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.322 12:46:51 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:18.322 12:46:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:18.595 [2024-07-15 12:46:51.010973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.595 null0 00:22:18.595 [2024-07-15 12:46:51.042914] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.595 [2024-07-15 12:46:51.043154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.595 12:46:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:18.595 710049436 00:22:18.595 12:46:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:18.595 376529228 00:22:18.595 12:46:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85726 00:22:18.595 12:46:51 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:18.595 12:46:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85726 /var/tmp/bperf.sock 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85726 ']' 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.595 12:46:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:18.595 [2024-07-15 12:46:51.116444] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
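Before bdevperf can reference the keys by name, the test loads each interchange string into the kernel session keyring with keyctl; the serial numbers printed above (710049436 and 376529228) are what the later search and unlink calls operate on. A short sketch of the same step, using only the keyctl operations that appear in the trace (keyctl comes from the keyutils package, an assumption about the test image):

  # Load the PSK for key0 into the session keyring and inspect it.
  psk=$(cat /tmp/:spdk-test:key0)                  # NVMeTLSkey-1:00:...: string
  sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # prints the new key's serial
  keyctl print "$sn"                               # should echo the PSK back
  keyctl search @s user :spdk-test:key0            # name -> serial lookup used by check_keys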
00:22:18.595 [2024-07-15 12:46:51.116518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85726 ] 00:22:18.595 [2024-07-15 12:46:51.253051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.865 [2024-07-15 12:46:51.383529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.430 12:46:52 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.430 12:46:52 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:19.430 12:46:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:19.430 12:46:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:19.687 12:46:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:19.687 12:46:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:19.945 [2024-07-15 12:46:52.524371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:19.945 12:46:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:19.945 12:46:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:20.203 [2024-07-15 12:46:52.785983] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.203 nvme0n1 00:22:20.203 12:46:52 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:20.203 12:46:52 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:20.203 12:46:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:20.203 12:46:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:20.203 12:46:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:20.203 12:46:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:20.767 12:46:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:20.767 12:46:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:20.767 12:46:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:20.767 12:46:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:20.767 12:46:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:20.767 12:46:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:20.767 12:46:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:20.767 12:46:53 keyring_linux -- keyring/linux.sh@25 -- # sn=710049436 00:22:20.767 12:46:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:20.767 12:46:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:21.024 
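With the key in the session keyring, the attach sequence is driven over the bdevperf RPC socket; the bperf_cmd wrapper above is just rpc.py pointed at /var/tmp/bperf.sock. A sketch of those calls, with paths and arguments copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  "$rpc" -s "$sock" keyring_linux_set_options --enable   # let SPDK resolve :spdk-test:* names
  "$rpc" -s "$sock" framework_start_init                 # leave the --wait-for-rpc hold
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0                              # TLS PSK referenced by keyring name
  "$rpc" -s "$sock" keyring_get_keys | jq length         # check_keys expects 1 here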
12:46:53 keyring_linux -- keyring/linux.sh@26 -- # [[ 710049436 == \7\1\0\0\4\9\4\3\6 ]] 00:22:21.024 12:46:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 710049436 00:22:21.024 12:46:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:21.024 12:46:53 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:21.024 Running I/O for 1 seconds... 00:22:21.955 00:22:21.955 Latency(us) 00:22:21.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:21.955 nvme0n1 : 1.01 11536.03 45.06 0.00 0.00 11026.26 3664.06 13583.83 00:22:21.955 =================================================================================================================== 00:22:21.955 Total : 11536.03 45.06 0.00 0.00 11026.26 3664.06 13583.83 00:22:21.955 0 00:22:21.955 12:46:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:21.956 12:46:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:22.224 12:46:54 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:22.224 12:46:54 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:22.224 12:46:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:22.224 12:46:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:22.225 12:46:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:22.225 12:46:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:22.509 12:46:55 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:22.509 12:46:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:22.509 12:46:55 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:22.509 12:46:55 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:22.509 12:46:55 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:22.509 12:46:55 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:22.509 12:46:55 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:22.509 12:46:55 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.509 12:46:55 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:22.509 12:46:55 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.509 12:46:55 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:22.509 12:46:55 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:23.077 [2024-07-15 12:46:55.457449] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:23.077 [2024-07-15 12:46:55.458465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e460 (107): Transport endpoint is not connected 00:22:23.077 [2024-07-15 12:46:55.459434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e460 (9): Bad file descriptor 00:22:23.077 [2024-07-15 12:46:55.460425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.077 [2024-07-15 12:46:55.460453] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:23.077 [2024-07-15 12:46:55.460466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.077 request: 00:22:23.077 { 00:22:23.077 "name": "nvme0", 00:22:23.077 "trtype": "tcp", 00:22:23.077 "traddr": "127.0.0.1", 00:22:23.077 "adrfam": "ipv4", 00:22:23.077 "trsvcid": "4420", 00:22:23.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:23.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:23.077 "prchk_reftag": false, 00:22:23.077 "prchk_guard": false, 00:22:23.077 "hdgst": false, 00:22:23.077 "ddgst": false, 00:22:23.077 "psk": ":spdk-test:key1", 00:22:23.077 "method": "bdev_nvme_attach_controller", 00:22:23.077 "req_id": 1 00:22:23.077 } 00:22:23.077 Got JSON-RPC error response 00:22:23.077 response: 00:22:23.077 { 00:22:23.077 "code": -5, 00:22:23.077 "message": "Input/output error" 00:22:23.077 } 00:22:23.077 12:46:55 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:23.077 12:46:55 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:23.077 12:46:55 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:23.077 12:46:55 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:23.077 12:46:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:23.077 12:46:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@33 -- # sn=710049436 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 710049436 00:22:23.078 1 links removed 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@33 -- # sn=376529228 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 376529228 00:22:23.078 1 links removed 00:22:23.078 12:46:55 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 85726 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85726 ']' 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85726 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85726 00:22:23.078 killing process with pid 85726 00:22:23.078 Received shutdown signal, test time was about 1.000000 seconds 00:22:23.078 00:22:23.078 Latency(us) 00:22:23.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.078 =================================================================================================================== 00:22:23.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85726' 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@967 -- # kill 85726 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@972 -- # wait 85726 00:22:23.078 12:46:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85708 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85708 ']' 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85708 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.078 12:46:55 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85708 00:22:23.337 killing process with pid 85708 00:22:23.337 12:46:55 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.337 12:46:55 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.337 12:46:55 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85708' 00:22:23.337 12:46:55 keyring_linux -- common/autotest_common.sh@967 -- # kill 85708 00:22:23.337 12:46:55 keyring_linux -- common/autotest_common.sh@972 -- # wait 85708 00:22:23.596 ************************************ 00:22:23.596 END TEST keyring_linux 00:22:23.596 ************************************ 00:22:23.596 00:22:23.596 real 0m6.455s 00:22:23.596 user 0m12.453s 00:22:23.596 sys 0m1.687s 00:22:23.596 12:46:56 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.596 12:46:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:23.596 12:46:56 -- common/autotest_common.sh@1142 -- # return 0 00:22:23.596 12:46:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
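The cleanup trap that just ran undoes both halves of the setup: it drops the test keys from the session keyring and stops bdevperf and spdk_tgt. A condensed sketch of the same teardown, reusing the pids from this particular run (85726 and 85708), so it is illustrative rather than reusable as-is:

  # Remove both test keys; unlinking by serial is what the trace shows.
  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name") && keyctl unlink "$sn"
  done
  kill 85726   # bdevperf (reactor_1) first
  kill 85708   # then the spdk_tgt started at keyring/linux.sh@50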
00:22:23.596 12:46:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:23.596 12:46:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:23.596 12:46:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:23.596 12:46:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:23.596 12:46:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:23.596 12:46:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:23.596 12:46:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:23.596 12:46:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.596 12:46:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.596 12:46:56 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:23.596 12:46:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:23.596 12:46:56 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:23.596 12:46:56 -- common/autotest_common.sh@10 -- # set +x 00:22:25.496 INFO: APP EXITING 00:22:25.496 INFO: killing all VMs 00:22:25.496 INFO: killing vhost app 00:22:25.496 INFO: EXIT DONE 00:22:25.755 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:26.013 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:26.013 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:26.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:26.580 Cleaning 00:22:26.580 Removing: /var/run/dpdk/spdk0/config 00:22:26.580 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:26.580 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:26.580 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:26.580 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:26.580 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:26.580 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:26.580 Removing: /var/run/dpdk/spdk1/config 00:22:26.580 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:26.580 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:26.580 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:26.580 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:26.580 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:26.580 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:26.580 Removing: /var/run/dpdk/spdk2/config 00:22:26.580 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:26.580 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:26.580 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:26.580 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:26.580 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:26.580 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:26.580 Removing: /var/run/dpdk/spdk3/config 00:22:26.580 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:26.580 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:26.580 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:26.580 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:26.580 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:26.580 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:26.580 Removing: /var/run/dpdk/spdk4/config 00:22:26.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:26.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:26.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:26.838 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:26.838 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:26.838 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:26.838 Removing: /dev/shm/nvmf_trace.0 00:22:26.838 Removing: /dev/shm/spdk_tgt_trace.pid58750 00:22:26.838 Removing: /var/run/dpdk/spdk0 00:22:26.838 Removing: /var/run/dpdk/spdk1 00:22:26.838 Removing: /var/run/dpdk/spdk2 00:22:26.838 Removing: /var/run/dpdk/spdk3 00:22:26.838 Removing: /var/run/dpdk/spdk4 00:22:26.838 Removing: /var/run/dpdk/spdk_pid58599 00:22:26.838 Removing: /var/run/dpdk/spdk_pid58750 00:22:26.838 Removing: /var/run/dpdk/spdk_pid58937 00:22:26.838 Removing: /var/run/dpdk/spdk_pid59029 00:22:26.838 Removing: /var/run/dpdk/spdk_pid59051 00:22:26.838 Removing: /var/run/dpdk/spdk_pid59166 00:22:26.838 Removing: /var/run/dpdk/spdk_pid59184 00:22:26.838 Removing: /var/run/dpdk/spdk_pid59302 00:22:26.838 Removing: /var/run/dpdk/spdk_pid59498 00:22:26.839 Removing: /var/run/dpdk/spdk_pid59643 00:22:26.839 Removing: /var/run/dpdk/spdk_pid59703 00:22:26.839 Removing: /var/run/dpdk/spdk_pid59779 00:22:26.839 Removing: /var/run/dpdk/spdk_pid59870 00:22:26.839 Removing: /var/run/dpdk/spdk_pid59942 00:22:26.839 Removing: /var/run/dpdk/spdk_pid59980 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60016 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60077 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60177 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60615 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60662 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60707 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60723 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60801 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60817 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60890 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60906 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60951 00:22:26.839 Removing: /var/run/dpdk/spdk_pid60969 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61015 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61033 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61161 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61191 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61265 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61317 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61347 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61405 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61440 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61473 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61509 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61543 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61578 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61611 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61647 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61686 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61718 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61753 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61787 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61822 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61857 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61892 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61926 00:22:26.839 Removing: /var/run/dpdk/spdk_pid61963 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62000 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62038 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62072 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62108 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62178 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62272 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62580 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62592 00:22:26.839 
Removing: /var/run/dpdk/spdk_pid62623 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62642 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62663 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62682 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62701 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62711 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62741 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62749 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62770 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62789 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62808 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62828 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62848 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62862 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62877 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62896 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62915 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62931 00:22:26.839 Removing: /var/run/dpdk/spdk_pid62967 00:22:27.097 Removing: /var/run/dpdk/spdk_pid62980 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63015 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63074 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63102 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63112 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63140 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63155 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63163 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63205 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63219 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63253 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63262 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63272 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63281 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63291 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63306 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63314 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63325 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63359 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63380 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63395 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63429 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63433 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63446 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63482 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63499 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63531 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63533 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63546 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63554 00:22:27.097 Removing: /var/run/dpdk/spdk_pid63561 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63574 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63576 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63589 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63663 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63711 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63821 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63854 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63894 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63914 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63936 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63956 00:22:27.098 Removing: /var/run/dpdk/spdk_pid63985 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64006 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64076 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64092 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64147 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64218 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64285 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64314 00:22:27.098 Removing: 
/var/run/dpdk/spdk_pid64400 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64448 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64486 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64699 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64796 00:22:27.098 Removing: /var/run/dpdk/spdk_pid64825 00:22:27.098 Removing: /var/run/dpdk/spdk_pid65150 00:22:27.098 Removing: /var/run/dpdk/spdk_pid65188 00:22:27.098 Removing: /var/run/dpdk/spdk_pid65476 00:22:27.098 Removing: /var/run/dpdk/spdk_pid65887 00:22:27.098 Removing: /var/run/dpdk/spdk_pid66159 00:22:27.098 Removing: /var/run/dpdk/spdk_pid66931 00:22:27.098 Removing: /var/run/dpdk/spdk_pid67759 00:22:27.098 Removing: /var/run/dpdk/spdk_pid67877 00:22:27.098 Removing: /var/run/dpdk/spdk_pid67944 00:22:27.098 Removing: /var/run/dpdk/spdk_pid69198 00:22:27.098 Removing: /var/run/dpdk/spdk_pid69411 00:22:27.098 Removing: /var/run/dpdk/spdk_pid72756 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73081 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73191 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73319 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73347 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73374 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73402 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73494 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73629 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73784 00:22:27.098 Removing: /var/run/dpdk/spdk_pid73865 00:22:27.098 Removing: /var/run/dpdk/spdk_pid74058 00:22:27.098 Removing: /var/run/dpdk/spdk_pid74136 00:22:27.098 Removing: /var/run/dpdk/spdk_pid74234 00:22:27.098 Removing: /var/run/dpdk/spdk_pid74548 00:22:27.098 Removing: /var/run/dpdk/spdk_pid74927 00:22:27.098 Removing: /var/run/dpdk/spdk_pid74936 00:22:27.098 Removing: /var/run/dpdk/spdk_pid75211 00:22:27.098 Removing: /var/run/dpdk/spdk_pid75225 00:22:27.098 Removing: /var/run/dpdk/spdk_pid75239 00:22:27.098 Removing: /var/run/dpdk/spdk_pid75273 00:22:27.098 Removing: /var/run/dpdk/spdk_pid75283 00:22:27.098 Removing: /var/run/dpdk/spdk_pid75589 00:22:27.098 Removing: /var/run/dpdk/spdk_pid75633 00:22:27.357 Removing: /var/run/dpdk/spdk_pid75909 00:22:27.357 Removing: /var/run/dpdk/spdk_pid76105 00:22:27.357 Removing: /var/run/dpdk/spdk_pid76484 00:22:27.357 Removing: /var/run/dpdk/spdk_pid76993 00:22:27.357 Removing: /var/run/dpdk/spdk_pid77805 00:22:27.357 Removing: /var/run/dpdk/spdk_pid78390 00:22:27.357 Removing: /var/run/dpdk/spdk_pid78392 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80295 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80355 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80410 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80476 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80591 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80650 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80712 00:22:27.357 Removing: /var/run/dpdk/spdk_pid80772 00:22:27.357 Removing: /var/run/dpdk/spdk_pid81092 00:22:27.357 Removing: /var/run/dpdk/spdk_pid82252 00:22:27.357 Removing: /var/run/dpdk/spdk_pid82398 00:22:27.357 Removing: /var/run/dpdk/spdk_pid82641 00:22:27.357 Removing: /var/run/dpdk/spdk_pid83178 00:22:27.357 Removing: /var/run/dpdk/spdk_pid83338 00:22:27.357 Removing: /var/run/dpdk/spdk_pid83493 00:22:27.357 Removing: /var/run/dpdk/spdk_pid83587 00:22:27.357 Removing: /var/run/dpdk/spdk_pid83742 00:22:27.357 Removing: /var/run/dpdk/spdk_pid83851 00:22:27.357 Removing: /var/run/dpdk/spdk_pid84507 00:22:27.357 Removing: /var/run/dpdk/spdk_pid84542 00:22:27.357 Removing: /var/run/dpdk/spdk_pid84577 00:22:27.357 Removing: /var/run/dpdk/spdk_pid84830 
00:22:27.357 Removing: /var/run/dpdk/spdk_pid84863 00:22:27.357 Removing: /var/run/dpdk/spdk_pid84897 00:22:27.357 Removing: /var/run/dpdk/spdk_pid85319 00:22:27.357 Removing: /var/run/dpdk/spdk_pid85336 00:22:27.357 Removing: /var/run/dpdk/spdk_pid85591 00:22:27.357 Removing: /var/run/dpdk/spdk_pid85708 00:22:27.357 Removing: /var/run/dpdk/spdk_pid85726 00:22:27.357 Clean 00:22:27.357 12:46:59 -- common/autotest_common.sh@1451 -- # return 0 00:22:27.357 12:46:59 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:27.357 12:46:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:27.357 12:46:59 -- common/autotest_common.sh@10 -- # set +x 00:22:27.357 12:46:59 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:27.357 12:46:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:27.357 12:46:59 -- common/autotest_common.sh@10 -- # set +x 00:22:27.357 12:47:00 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:27.357 12:47:00 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:27.357 12:47:00 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:27.357 12:47:00 -- spdk/autotest.sh@391 -- # hash lcov 00:22:27.357 12:47:00 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:27.357 12:47:00 -- spdk/autotest.sh@393 -- # hostname 00:22:27.357 12:47:00 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:27.616 geninfo: WARNING: invalid characters removed from testname! 
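Once the functional tests finish, autotest turns the gcov data from the instrumented build into an lcov report: the capture above tags the run with the hostname and writes cov_test.info, and the calls that follow merge it with the baseline and strip third-party paths. A condensed sketch of that post-processing, with the rc flag list trimmed from the full set in the trace (rendering the result with genhtml would be an extra, assumed step not shown here):

  cd /home/vagrant/spdk_repo/spdk
  out=../output
  flags="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  lcov $flags -c -d . -t "$(hostname)" -o "$out/cov_test.info"            # capture the test run
  lcov $flags -a "$out/cov_base.info" -a "$out/cov_test.info" \
       -o "$out/cov_total.info"                                           # merge with baseline
  lcov $flags -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"  # drop bundled DPDK
  lcov $flags -r "$out/cov_total.info" '/usr/*'   -o "$out/cov_total.info"  # drop system headers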
00:22:59.719 12:47:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:59.719 12:47:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:01.669 12:47:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:04.197 12:47:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:07.507 12:47:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:10.034 12:47:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:13.340 12:47:45 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:13.340 12:47:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.340 12:47:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:13.340 12:47:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.340 12:47:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.340 12:47:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.341 12:47:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.341 12:47:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.341 12:47:45 -- paths/export.sh@5 -- $ export PATH 00:23:13.341 12:47:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.341 12:47:45 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:13.341 12:47:45 -- common/autobuild_common.sh@444 -- $ date +%s 00:23:13.341 12:47:45 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721047665.XXXXXX 00:23:13.341 12:47:45 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721047665.kJhpL2 00:23:13.341 12:47:45 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:23:13.341 12:47:45 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:23:13.341 12:47:45 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:13.341 12:47:45 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:13.341 12:47:45 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:13.341 12:47:45 -- common/autobuild_common.sh@460 -- $ get_config_params 00:23:13.341 12:47:45 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:23:13.341 12:47:45 -- common/autotest_common.sh@10 -- $ set +x 00:23:13.341 12:47:45 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:13.341 12:47:45 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:23:13.341 12:47:45 -- pm/common@17 -- $ local monitor 00:23:13.341 12:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:13.341 12:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:13.341 12:47:45 -- pm/common@25 -- $ sleep 1 00:23:13.341 12:47:45 -- pm/common@21 -- $ date +%s 00:23:13.341 12:47:45 -- pm/common@21 -- $ date +%s 00:23:13.341 12:47:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721047665 00:23:13.341 12:47:45 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721047665 00:23:13.341 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721047665_collect-vmstat.pm.log 00:23:13.341 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721047665_collect-cpu-load.pm.log 00:23:13.926 12:47:46 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:23:13.926 12:47:46 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:13.926 12:47:46 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:13.926 12:47:46 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:13.926 12:47:46 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:13.926 12:47:46 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:13.926 12:47:46 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:13.926 12:47:46 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:13.926 12:47:46 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:13.926 12:47:46 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:13.926 12:47:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:13.926 12:47:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:13.926 12:47:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:13.926 12:47:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:13.926 12:47:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:13.926 12:47:46 -- pm/common@44 -- $ pid=87450 00:23:13.926 12:47:46 -- pm/common@50 -- $ kill -TERM 87450 00:23:13.926 12:47:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:13.926 12:47:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:13.926 12:47:46 -- pm/common@44 -- $ pid=87452 00:23:13.926 12:47:46 -- pm/common@50 -- $ kill -TERM 87452 00:23:13.926 + [[ -n 5115 ]] 00:23:13.926 + sudo kill 5115 00:23:13.942 [Pipeline] } 00:23:13.969 [Pipeline] // timeout 00:23:13.979 [Pipeline] } 00:23:14.005 [Pipeline] // stage 00:23:14.015 [Pipeline] } 00:23:14.036 [Pipeline] // catchError 00:23:14.048 [Pipeline] stage 00:23:14.051 [Pipeline] { (Stop VM) 00:23:14.069 [Pipeline] sh 00:23:14.348 + vagrant halt 00:23:18.571 ==> default: Halting domain... 00:23:23.849 [Pipeline] sh 00:23:24.131 + vagrant destroy -f 00:23:28.319 ==> default: Removing domain... 
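After coverage collection and packaging, the pipeline powers the test VM off and deletes it so the next run starts from a fresh box. The equivalent manual teardown, assuming it is run from the directory holding this job's Vagrantfile (that path is managed by the vagrant wrapper scripts and is not shown in the trace):

  vagrant halt         # graceful shutdown; the trace reports "Halting domain..."
  vagrant destroy -f   # remove the libvirt domain without prompting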
00:23:28.332 [Pipeline] sh 00:23:28.621 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:28.647 [Pipeline] } 00:23:28.669 [Pipeline] // stage 00:23:28.676 [Pipeline] } 00:23:28.696 [Pipeline] // dir 00:23:28.703 [Pipeline] } 00:23:28.723 [Pipeline] // wrap 00:23:28.731 [Pipeline] } 00:23:28.748 [Pipeline] // catchError 00:23:28.759 [Pipeline] stage 00:23:28.762 [Pipeline] { (Epilogue) 00:23:28.780 [Pipeline] sh 00:23:29.061 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:35.638 [Pipeline] catchError 00:23:35.640 [Pipeline] { 00:23:35.656 [Pipeline] sh 00:23:35.931 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:35.931 Artifacts sizes are good 00:23:35.941 [Pipeline] } 00:23:35.961 [Pipeline] // catchError 00:23:35.973 [Pipeline] archiveArtifacts 00:23:35.981 Archiving artifacts 00:23:36.214 [Pipeline] cleanWs 00:23:36.227 [WS-CLEANUP] Deleting project workspace... 00:23:36.227 [WS-CLEANUP] Deferred wipeout is used... 00:23:36.233 [WS-CLEANUP] done 00:23:36.235 [Pipeline] } 00:23:36.253 [Pipeline] // stage 00:23:36.260 [Pipeline] } 00:23:36.274 [Pipeline] // node 00:23:36.280 [Pipeline] End of Pipeline 00:23:36.306 Finished: SUCCESS